Send Me the Long Letter
“If I Had More Time, I Would Have Written a Shorter Letter” — and I No Longer Care
There is a quote attributed variously to Pascal, Twain, Churchill, and a rotating cast of dead intellectuals who apparently all struggled with brevity: “If I had more time, I would have written a shorter letter.”
The meaning is clear. Compression takes effort. Distillation is a craft. The person who sends you the tight, clean, five-bullet executive summary has done work on your behalf — they’ve made judgment calls about what matters, they’ve edited out the noise, they’ve respected your time.
We’ve built an entire professional culture around this idea.
One-page resumes. Five-bullet project summaries. Ten-slide pitch decks. Three references. Executive summaries that must fit on a single page or they don’t get read. The discipline of compression has been treated as a proxy for intelligence itself — if you can’t explain it simply, you don’t understand it well enough.
I’ve lived by this. I’ve taught it. I’ve rejected submissions that violated it.
I’m done.
Not because compression isn’t a skill. It is. But because the assumption underneath the requirement — that the reader needs the sender to do the compression work — is no longer true. And when that assumption breaks, the entire ethical architecture of the short letter breaks with it.
However, do mail me a handwritten note. That is always a nice touch.
BTW, this is a long post. Cut and paste it into your AI and prompt it.
The Problem With the Short Letter
Here’s what nobody says out loud about the short letter: it isn’t neutral.
When someone compresses their resume to one page, they are not simply editing for clarity. They are making editorial decisions — strategic ones — about what you see and what you don’t. They are optimizing for a version of themselves that fits through the filter you’ve constructed. Every cut is a judgment call made in their interest, rationalized as a courtesy to yours.
The one-page resume doesn’t give you the person. It gives you the person’s best guess about what you want to see, filtered through their anxiety about getting the meeting.
The ten-slide pitch deck doesn’t give you the business. It gives you the ten things the founder thinks will generate the most interest in the least time, compressed to the bullets least likely to raise objections, sequenced to build momentum toward a yes before the uncomfortable questions have time to surface.
The five-bullet project summary doesn’t tell you what happened on that job. It tells you what the construction manager wants you to remember about what happened on that job. The cost overrun isn’t in there. The eighteen-month schedule extension isn’t framed as a failure — it’s repositioned as “successfully navigated supply chain disruption.” The project that didn’t get referenced at all is the one you should probably ask about.
Compression isn’t just efficiency. Compression is editorial control. And when the other party controls the compression, they control the narrative.
We’ve spent decades optimizing for formats that serve the sender, not the reader. We just convinced ourselves it was the other way around.
The cruelest part is that we made it a virtue. We told people that if they couldn’t say it in one page, they didn’t understand it well enough. We created a culture where the ability to compress was treated as the measure of mastery. We rewarded the people who were best at managing what we saw, and called it rigor.
It wasn’t rigor. It was information asymmetry dressed up as professionalism.
What Changed
I’ve spent the last few years building AI infrastructure for the construction industry — a world that runs almost entirely on compressed, curated, strategically edited information. Prequalification packages. SOQ submissions. Project experience narratives. Reference letters from people who were briefed before they were nominated.
In that work, I’ve been building systems that ingest professional data at scale — resumes, project experience sheets, LinkedIn profiles, reference letters, business strategies, technical submittals. The more we’ve built, the more I’ve noticed something that keeps stopping me.
The fifty-page resume is actually better.
Not for a human to read. A human should never have to read a fifty-page resume. The cognitive load alone would make the exercise counterproductive. But for an AI? The fifty-page resume is a gift. More signal. More context. More raw material. The AI doesn’t get fatigued. It doesn’t skim. It doesn’t decide that page four is probably redundant and jump to the summary. It reads everything, reconciles everything across sources, surfaces inconsistencies, identifies patterns across the full body of work — and then tells me what’s actually there, not what the applicant hoped I’d notice.
The same is true for a business strategy document with all the appendices intact. The methodology section you usually cut because the deck feels too heavy. The competitive analysis that didn’t make it into the body because it complicated the narrative. The market data you built but buried in an appendix no one reads. The risk register you constructed but didn’t include because you didn’t want to lead with uncertainty.
Send it all.
Because my AI doesn’t process the document the way you’re afraid I will. It doesn’t get bored at the methodology section. It doesn’t skip to the financials. It doesn’t form a first impression on slide three and spend the rest of the deck confirming it. It reads the whole thing — every word of it — and surfaces what I actually want to know, not what you decided I should want to know.
This is a different kind of reading. And it changes everything about what you should send me.
The Shift Nobody Is Talking About
We talk endlessly about AI making people more productive. Faster drafting, faster coding, faster analysis. The productivity narrative dominates. What stays oddly quiet in that conversation is what AI does to the compression premium — the value that used to accrue to whoever did the distillation work.
For a century, the short letter was valuable precisely because someone had to read the long one. The short letter spared the reader that burden. The work of compression was real work, and the reader was the genuine beneficiary. A good executive assistant who could distill a forty-page report into a three-paragraph brief was worth their weight in gold because they were translating cognitive labor into decision-ready clarity.
AI eliminates that burden on the reader’s side.
Which means the compression work the sender does is no longer a gift to the reader. It’s a filtering mechanism that serves the sender. The sender who submits a compressed, curated, carefully edited package is no longer doing me a favor. They are making decisions about what I see before I’ve had a chance to decide what I want to know. And I now have a way to opt out of receiving only what they want me to see.
This is a fundamental shift in information dynamics — and most people haven’t noticed it yet because AI adoption is still uneven enough that the old social contract around brevity still feels operative. It won’t for long.
The executive who required one-page memos was rationing attention. Reasonable. They had scarce cognitive resources and needed other people to pre-process information on their behalf. The one-page requirement was a delegation of judgment — a necessary one, given the constraints. There was no other way.
But I’m not rationing attention the same way anymore. My AI handles the pre-processing. What I need from you is raw signal — all of it — not your pre-processed version of what you think I want. The one-page requirement that used to protect my time now just limits my visibility.
And here is the thing that nobody wants to say yet, but that I think is true: the people who figure this out first will have a meaningful information advantage over those who don’t. If I’m reading long-form with AI assistance and my competitors are still reading compressed packages at human speed, I know more than they do. I see what they don’t. And in a world where decisions are made on incomplete information, that asymmetry compounds.
What I Actually Want Now
Send me the long business strategy. All of it. The appendix. The version history. The scenario analysis you built but didn’t include because it complicated the story. The sensitivity tables. The bear case you’re afraid to show anyone because it makes the deal look fragile. The assumptions log that nobody ever reads but that contains the most important information in the entire document — what you had to assume to make the numbers work.
Send me the full resume. Every project. Every role. The jobs that didn’t work out. The transitions that look awkward on paper. The things that didn’t fit neatly into the conventional narrative of upward career progression. The project you left before completion. The firm you spent two years at that nobody has heard of. The gap year that you’ve been papering over with vague language about “independent consulting.”
Send me the complete RFP response. Don’t edit for length. Don’t strip out the qualifications narrative because you thought I’d skip it. Don’t compress the project descriptions to bullet points because you were afraid of overwhelming me. Don’t leave out the methodology section because it feels too technical. Don’t omit the project you’re proud of because it doesn’t fit the exact scope parameters I defined — let me decide if the analog is close enough.
My AI will read it. All of it. And it will tell me what I want to know, not what you decided I should want to know.
The irony — and this is the part that consistently surprises me — is that long-form is actually more honest. It’s structurally harder to maintain a curated narrative at length. The more you write, the more the real picture emerges. Inconsistencies surface across sections. The actual depth, or shallowness, of experience becomes visible. The gaps that disappear in a tight summary start to appear when there’s enough text to triangulate against. The claim that doesn’t hold up under cross-reference with the supporting data two appendices later — that claim only survives in the short version.
Long-form doesn’t just give me more data. It gives me truer data.
The Construction Industry Case
This lands with particular force in AEC, which is where I spend most of my time.
Professional qualification in construction has always been a compressed-format problem. One-page resumes for project managers with thirty years of experience across a hundred projects. Five-bullet project summaries for jobs that ran four years, involved fifteen hundred submittals, and generated eight million dollars in change orders. Three references, carefully selected from the universe of people who will say the right things in the right way.
The entire prequalification infrastructure of the construction industry is built on the assumption that owners have limited bandwidth to process qualification data, so firms must compress it for them. That compression is exactly where the gaming happens. The carefully selected projects — the ones that match the owner’s stated criteria while omitting the ones that don’t. The strategic omissions — the projects that ran over, the claims that went to litigation, the owner relationships that didn’t survive the job. The references who were briefed before they were nominated, who know what questions are coming and have their answers ready.
Owners end up selecting teams based on the best-curated version of a firm’s experience rather than the most complete one. They make billion-dollar trust decisions based on ten-page packages that were assembled by marketing professionals whose job is to make the firm look as good as possible within the rules of the game.
And here is the part that should genuinely disturb anyone who has ever hired a construction team: the people named in the package are often not the people who show up to do the work.
A firm wins a project on the strength of a senior project executive’s track record. The owner reviews the resume, checks the references, approves the team. And then, six months after NTP, that project executive is running two other jobs and a junior PM who wasn’t named in the proposal is making the day-to-day decisions. The owner has no mechanism to detect this. The key personnel clause in the contract gives them theoretical recourse — but in practice, by the time the substitution is visible, the project is underway and the leverage is gone.
The compressed format makes this possible. If the owner had subjected every member of the proposed team to a thorough, multi-source, AI-mediated qualification review at the time of selection, the substitution problem wouldn’t disappear — but it would be much harder to hide. Because the review would have created a record. A detailed, specific, cross-referenced record of who was promised and what was claimed about them.
Trust But Verify — At Scale
Reagan’s dictum — trust but verify — has always sounded better than it worked in practice. The verify part requires resources. Time. Expertise. Access to information that may be distributed across dozens of sources. For most organizations making hiring and procurement decisions, the verify function is more aspiration than reality. You trust because verifying is expensive.
AI changes the economics of verification.
What used to require a team of researchers, a background check vendor, a network of industry contacts, and weeks of calendar time can now be done in hours. Not perfectly. Not without error. But systematically, at scale, across every member of a proposed team simultaneously — not just the two or three senior names you had bandwidth to check manually.
Here is what systematic, AI-mediated verification actually looks like in practice, and why long-form submission is the prerequisite for it working:
Layer one is self-attestation. The professional submits everything — their full resume, their complete project history, their LinkedIn profile, their experience sheets. All of it, uncompressed. They attest that what they’ve submitted is accurate. This is the starting point, not the ending point. Self-attestation is table stakes. It creates a record. It establishes a baseline. It also establishes liability — if you’ve attested to something false, that’s a different problem than if you simply omitted it because the format didn’t have room.
Layer two is machine verification. The AI cross-references what was claimed against what’s findable. Project records. Company histories. Public permit data. News archives. Professional licensing databases. Industry publications. The goal isn’t to catch people lying — most people aren’t lying outright. The goal is to surface the inconsistencies that emerge naturally when someone’s self-reported narrative is triangulated against independent sources. The project that was claimed as a hundred-million-dollar job that public records show was a forty-million-dollar job. The role described as “project executive” on a job where the person’s LinkedIn shows they left the firm before that project reached substantial completion. The reference listed who, when cross-referenced against the project timeline, was no longer at the firm when the work was done.
These aren’t necessarily disqualifying discoveries. Sometimes there’s a perfectly good explanation. But they’re the things you want to know, and they only surface when you have enough data to triangulate — which means you need the long-form submission, not the compressed one.
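To make layer two concrete, here is a minimal sketch of what triangulation might look like in code. Everything in it is hypothetical — the field names, the 25% discrepancy threshold, and the idea of an independently sourced project roster are my illustrative assumptions, not a description of any real system. The point is only that the output is a list of questions to ask, not verdicts.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """A self-attested project claim from the long-form submission."""
    project: str
    role: str
    value_usd: int  # claimed contract value

@dataclass
class Record:
    """What independent sources (permits, news, licensing data) say."""
    project: str
    value_usd: int
    roster: set  # names independently associated with the project

def triangulate(claims, records_by_project, person):
    """Surface inconsistencies between claims and independent records.

    Each flag is a prompt for a follow-up question, not a disqualifier:
    sometimes there is a perfectly good explanation.
    """
    flags = []
    for c in claims:
        rec = records_by_project.get(c.project)
        if rec is None:
            flags.append((c.project, "no independent record found"))
            continue
        # A claimed value more than 25% off the public record is worth asking about
        # (the threshold is an arbitrary illustration).
        if rec.value_usd and abs(c.value_usd - rec.value_usd) / rec.value_usd > 0.25:
            flags.append((c.project,
                          f"claimed ${c.value_usd:,}, records show ${rec.value_usd:,}"))
        if person not in rec.roster:
            flags.append((c.project, "person not found on independent project roster"))
    return flags
```

The $100M-claimed-versus-$40M-recorded example from above would surface as a single flag here, while a project with no independent trace at all produces a different kind of flag — which is exactly the distinction a human reviewer wants preserved.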
Layer three is human validation. References — but not the kind you’re used to. Not three people the candidate called in advance and briefed on what to say. References tied to specific projects and specific roles, submitted directly to a third party rather than through the candidate, structured around what the professional actually did rather than their general character and work ethic.
A reference letter that says “John is a pleasure to work with and I would hire him again in a heartbeat” tells you almost nothing useful. A reference letter that says “John served as project executive on the $180M terminal expansion at [Airport], from design development through beneficial occupancy. He managed the GC relationship, led the owner’s review process for all major change events, and was instrumental in resolving a significant design conflict that emerged during the enclosure phase — here is specifically how he handled it” — that tells you something. That’s a reference you can actually use.
The reason we don’t get references like that today is partly because the format doesn’t demand it, and partly because the candidate controls the process. Change both of those things, and the reference becomes a real verification instrument rather than a professional courtesy.
When you stack all three layers — self-attestation, machine verification, and human validation — you have something that doesn’t exist anywhere in the construction industry today: a portable, progressively verified professional record that belongs to the individual, travels with them across firms and projects, and gets more valuable with every engagement they complete.
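One way to picture that stacked, portable record is as a data structure in which each claim carries the verification layers it has earned, and new engagements can only raise a claim’s layer, never silently rewrite the claim. This is a sketch under my own assumptions — the type names and the three-level ordering are illustrative, not a spec.

```python
from dataclasses import dataclass, field
from enum import Enum

class Layer(Enum):
    SELF_ATTESTED = 1     # layer one: claimed and attested by the professional
    MACHINE_VERIFIED = 2  # layer two: cross-referenced against independent sources
    HUMAN_VALIDATED = 3   # layer three: confirmed by a project-specific reference

@dataclass
class ProjectClaim:
    project: str
    role: str
    layers: set = field(default_factory=lambda: {Layer.SELF_ATTESTED})

    def strongest(self) -> Layer:
        """The highest verification layer this claim has reached so far."""
        return max(self.layers, key=lambda l: l.value)

@dataclass
class ProfessionalRecord:
    """Portable record that belongs to the individual and travels across
    firms and projects; each completed engagement can add a layer to a
    claim but never edits the underlying claim itself."""
    name: str
    claims: list = field(default_factory=list)

    def add_claim(self, project: str, role: str) -> None:
        self.claims.append(ProjectClaim(project, role))

    def promote(self, project: str, layer: Layer) -> None:
        for c in self.claims:
            if c.project == project:
                c.layers.add(layer)
```

Under this model, the record literally “gets more valuable with every engagement”: a claim that starts as self-attestation accretes machine verification and human validation over time, and the full history of layers stays visible.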
That record is only possible because the professional submitted long-form data in the first place. The compressed package can’t support this level of verification. There isn’t enough there to triangulate against. The one-page resume was designed to prevent exactly the kind of scrutiny that makes trust meaningful.

