The starting line
When the firm called me in, the entire practice ran on three things:
shared spreadsheets in a cloud drive, a forest of Word templates
named final_v3_REAL.docx, and email threads carrying
attachments back and forth with clients. Cases lived in heads and
inboxes. Deadlines lived in someone's calendar, maybe.
The brief was small in words and large in scope: get the firm onto a real system. Track cases. Generate documents without retyping client details. Let clients see their own case status without a phone call. Run it on AWS for trust reasons. Don't break anything during the rollout.
Constraints that shaped every decision
The non-negotiables drove the design more than the feature list did.
- Attorney-client privilege. Two cases in the same firm, on opposing sides of a regulatory matter, cannot bleed data into one another. Not in search results, not in document references, not in recently-viewed lists, not anywhere.
- Single tenant by default. The firm wanted their data on their own infrastructure, not shared with anyone else's. That ruled out off-the-shelf multi-tenant SaaS and set the AWS layout.
- Solo build. No team, no on-call rotation. Anything I shipped had to be operable by one person — and recoverable by them.
- Daily use from day one of rollout. No staging-only periods. The firm needed to keep practicing law while we migrated.
Architecture in one picture
┌─────────────────────────────────────────┐
│ Cloudflare DNS │
└────────────────────┬───────────────────┘
│
┌────────▼────────┐
│ Nginx (EC2) │ TLS, gzip, rate limits
└────────┬────────┘
│
┌──────────────────▼──────────────────┐
│ Laravel monolith (PHP-FPM) │
│ - Inertia.js → Vue SPA pages │
│ - Policy-driven authorization │
│ - Document automation pipeline │
│ - Queue worker (database driver) │
└──┬──────────────┬──────────────┬────┘
│ │ │
┌──────▼─────┐ ┌─────▼─────┐ ┌─────▼─────┐
│ MySQL RDS │ │ S3 │ │ SES (mail)│
│ (private) │ │ documents │ │ │
└────────────┘ └───────────┘ └───────────┘
Single VPC, two subnets, one Nginx-fronted EC2 running PHP-FPM, one RDS instance in the private subnet, S3 for documents with server-side encryption, SES for transactional email. No microservices, no Kubernetes, no Redis cluster. A monolith was the right answer for a sole engineer running a firm-sized workload.
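The Nginx box's three jobs from the diagram — TLS, gzip, rate limits — fit in a short config fragment. Everything below is illustrative (domain, paths, and limit values are stand-ins, not the firm's actual config):

```nginx
# Rate-limit zone keyed by client IP (values illustrative).
limit_req_zone $binary_remote_addr zone=app:10m rate=10r/s;

server {
    listen 443 ssl http2;
    server_name portal.example-firm.com;      # placeholder domain

    ssl_certificate     /etc/ssl/firm/fullchain.pem;
    ssl_certificate_key /etc/ssl/firm/privkey.pem;

    gzip on;
    gzip_types text/css application/javascript application/json;

    location / {
        limit_req zone=app burst=20 nodelay;  # absorb small bursts, reject floods
        try_files $uri /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    }
}
```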
The document automation engine
Half the pain at the firm was retyping the same client name, case number, court details, and dates into ten different documents. Document automation is what gets a legal platform off the ground.
The core idea is simple: documents are templates with named placeholders, rendered against a case context. The surface area is small, but the failure modes are nasty in legal work — a wrong court address on a pleading is not just embarrassing.
Templates as data, not as files
Templates live in the database with their placeholder map, version, and an immutable hash. When an attorney creates a new template, we don't overwrite the old one — we add a new version row. Every generated document records the exact template version it was rendered from. If we ever need to ask "what did we send last June?", the answer is reproducible.
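The append-only rule is small enough to sketch in a few lines. This is a simplified stand-in, not the actual model code — `newVersionRow` and its column names are assumptions shaped by the description above:

```php
<?php
// Sketch: every template edit becomes a NEW row; the hash pins the exact body.
// Helper name and column names are illustrative, not the real schema.
function newVersionRow(string $templateName, int $currentVersion, string $body): array
{
    return [
        'name'    => $templateName,
        'version' => $currentVersion + 1,       // never overwrite, always append
        'body'    => $body,
        'hash'    => hash('sha256', $body),     // immutable content address
    ];
}

$row = newVersionRow('notice-of-appearance', 3, 'Dear {{ client.name }}, ...');
// A generated document stores the version and hash, so "what did we send
// last June?" resolves to exactly one template body.
```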
// app/Models/DocumentTemplate.php

use Illuminate\Support\Arr;

// Note: `case` is a reserved word in PHP, so the case model cannot be
// named `Case`; here it is `LegalCase`.
public function renderFor(LegalCase $case, array $overrides = []): string
{
    $context = array_merge($case->toTemplateContext(), $overrides);

    return $this->compile($this->body, $context);
}

protected function compile(string $body, array $context): string
{
    // Replace {{ dotted.placeholder }} tokens; unknown keys render a loud sentinel.
    return preg_replace_callback(
        '/\{\{\s*([a-z0-9_.]+)\s*\}\}/i',
        fn ($m) => Arr::get($context, $m[1], '«missing:'.$m[1].'»'),
        $body
    );
}
The «missing» sentinel is intentional. A silently-blank
field on a legal document is worse than an obviously-broken one.
Reviewers catch the angle brackets immediately.
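The sentinel behavior can be exercised outside Laravel. A self-contained sketch of the same regex pass, with a hand-rolled `dotGet` standing in for `Arr::get`:

```php
<?php
// Standalone version of the compile() pass; dotGet() stands in for Arr::get().
function dotGet(array $arr, string $key, string $default): string
{
    foreach (explode('.', $key) as $segment) {
        if (!is_array($arr) || !array_key_exists($segment, $arr)) {
            return $default;
        }
        $arr = $arr[$segment];
    }
    return is_array($arr) ? $default : (string) $arr;
}

function compileTemplate(string $body, array $context): string
{
    return preg_replace_callback(
        '/\{\{\s*([a-z0-9_.]+)\s*\}\}/i',
        fn ($m) => dotGet($context, $m[1], '«missing:' . $m[1] . '»'),
        $body
    );
}

$out = compileTemplate(
    'Re: {{ case.number }}, {{ client.name }}, {{ court.address }}',
    ['case' => ['number' => '24-CV-118'], 'client' => ['name' => 'Acme LLC']]
);
// $out === 'Re: 24-CV-118, Acme LLC, «missing:court.address»'
```

The unmatched `court.address` placeholder survives as a visible sentinel instead of an invisible blank.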
Generation pipeline
When a document is requested, the request is queued, rendered to DOCX, then archived to S3 with a content-addressed key. The case record holds a foreign key to the generated artifact, not the artifact itself. Documents become first-class entities with their own ACL, audit trail, and download history.
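A content-addressed key can be as simple as hashing the rendered bytes. The exact prefix layout below is an assumption; the text only establishes that keys are content-addressed and prefixed per case:

```php
<?php
// Sketch: derive the S3 object key from the rendered DOCX bytes.
// Identical renders dedupe to one object; any change yields a new key.
function documentKey(int $caseId, string $docxBytes): string
{
    return sprintf('cases/%d/documents/%s.docx', $caseId, hash('sha256', $docxBytes));
}
```

The case record then stores a foreign key to a `documents` row holding this key, rather than the bytes themselves.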
Access control under attorney-client privilege
The interesting problem on this build wasn't the documents — it was
the access model. The naïve approach in Laravel is to put a
user_id column on everything and call it a day. That
model is wrong for a law firm.
A case has multiple people legitimately attached to it: a lead attorney, an associate, a paralegal, the client. Each role sees a different surface. The client must never see internal billing notes. The associate must never see cases they were not assigned to. Searches and "recently viewed" lists must respect the case membership boundary, not just per-record permissions.
Membership over ownership
I built it around a case_members pivot table with a
role enum. Every read query that touches case-scoped data joins
through it. The Laravel policy layer enforces the same rule at the
record level. The two enforcement points back each other up — if
one is bypassed, the other still holds the line.
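A plausible shape for that pivot, inferred from the roles named above (the real schema may differ in names and types):

```sql
-- case_members: who is attached to a case, and in what capacity.
CREATE TABLE case_members (
    case_id    BIGINT UNSIGNED NOT NULL,
    user_id    BIGINT UNSIGNED NOT NULL,
    role       ENUM('lead_attorney', 'associate', 'paralegal', 'client') NOT NULL,
    created_at TIMESTAMP NULL,
    PRIMARY KEY (case_id, user_id),
    FOREIGN KEY (case_id) REFERENCES cases (id),
    FOREIGN KEY (user_id) REFERENCES users (id)
);
```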
// app/Models/Concerns/ScopedByCaseMembership.php
public function scopeVisibleTo(Builder $q, User $user): Builder
{
    // Only rows whose parent case lists $user in case_members survive.
    return $q->whereHas('case.members', fn ($m) => $m->where('user_id', $user->id));
}

// app/Policies/CaseDocumentPolicy.php
public function view(User $user, CaseDocument $doc): bool
{
    // Record-level backstop for the same membership rule.
    return $doc->case->members->contains('user_id', $user->id);
}
Every search endpoint, every list endpoint, every report endpoint
passes through visibleTo($user). There is no global
scope on the model — global scopes are easy to forget when writing
a raw query during a hot fix. An explicit scope on every read is
ugly and safe. I chose ugly and safe.
Client portal as a different surface
The client portal isn't a smaller version of the attorney UI — it
is a different surface that happens to read from the same
database. Clients authenticate against a separate guard. The
policies on every model return a strict false by
default for the client guard unless the model is
explicitly flagged as client-visible.
Whitelisting visibility, not blacklisting it, was the design principle I trusted most. Forgetting to hide something is a much easier bug to write than forgetting to expose something.
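The separate guard is ordinary Laravel configuration. A sketch of the relevant `config/auth.php` fragment, under the assumption that portal clients get their own Eloquent provider and model (names here are illustrative):

```php
<?php
// config/auth.php (fragment, illustrative): staff and clients authenticate
// against different guards backed by different providers.
return [
    'guards' => [
        'web' => [
            'driver'   => 'session',
            'provider' => 'users',     // attorneys, associates, paralegals
        ],
        'client' => [
            'driver'   => 'session',
            'provider' => 'clients',   // portal users; policies default-deny this guard
        ],
    ],
    'providers' => [
        'users'   => ['driver' => 'eloquent', 'model' => App\Models\User::class],
        'clients' => ['driver' => 'eloquent', 'model' => App\Models\ClientUser::class],
    ],
];
```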
AWS infrastructure decisions
Why a monolith on one EC2 instance
Tempting alternative architectures for a 2026 portfolio piece: ECS Fargate, Lambda for document generation, ElastiCache, a queue on SQS, observability on CloudWatch Logs Insights. All defensible. None of them were the right call here.
The firm's workload is dominated by maybe a dozen concurrent users and a handful of background document generations per day. A single appropriately-sized EC2 instance handles it with headroom. ECS would have added control-plane operations I did not want to run alone. Lambda would have added cold-start variance on document generation that the firm would feel. The monolith is boring; the monolith ships.
What I did invest in
- RDS automated backups with point-in-time recovery enabled. The cost is small. The peace of mind is large.
- S3 versioning on the documents bucket. A deleted document is recoverable for ninety days; so is an overwritten one.
- SSM Parameter Store for secrets. Nothing sensitive lives in the .env on disk. The deploy script pulls secrets at boot.
- Per-case S3 prefix with bucket policies that scope object keys to case-membership claims in pre-signed URLs. A pre-signed URL leak still cannot grant access to another case's documents.
What broke, or surprised me
Templates are software, not content
I shipped the first version of templates editable as freeform
rich text. Attorneys promptly added handcrafted placeholder syntax
like [CLIENT NAME] alongside the real
{{ client.name }} tokens. Renders looked correct
until they didn't. The fix was a template-authoring UI that
inserts placeholders as visual chips, not as raw text. Templates
had become software — they needed an authoring environment, not a
text box.
Mobile portal usage was double what I assumed
The client-side traffic skewed heavily toward phones. The original portal was responsive but designed for the desktop case detail first. After the first month I rebuilt the mobile portal around a short single-column flow with case status, latest documents, and a message-attorney button — three taps from the home screen to anything that mattered.
Document download spikes
The firm occasionally ran end-of-month exports that pulled a burst of large documents from S3. The first version streamed them through the Laravel app. After watching the EC2 instance sweat, I moved downloads to pre-signed S3 URLs with short TTLs scoped to the requesting user's case membership. The app issues the URL, S3 serves the bytes. The app's job is authorization, not file delivery.
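The shape of that flow, reduced to its essentials. The presigner below is a stub; the real build would use the AWS SDK for PHP (`S3Client::createPresignedRequest`), and the function names and URL format here are illustrative:

```php
<?php
// Sketch of the download flow: the app's job is the membership check;
// presign() is a stand-in for the AWS SDK's createPresignedRequest().
function presign(string $key, int $ttlSeconds): string
{
    return sprintf('https://bucket.s3.amazonaws.com/%s?X-Amz-Expires=%d', $key, $ttlSeconds);
}

function downloadUrl(array $memberUserIds, int $requestingUserId, string $objectKey): ?string
{
    // Authorization first: non-members of the case get nothing.
    if (!in_array($requestingUserId, $memberUserIds, true)) {
        return null;
    }
    // Short TTL: a pre-signed URL is a bearer token, so keep its lifetime small.
    return presign($objectKey, 300);
}
```

The app hands back the URL and steps out of the way; S3 does the byte-pushing.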
What it does today
- Active daily use across the firm — attorneys, paralegals, and the firm's clients.
- Case tracking with status, deadlines, and a per-case timeline.
- Document automation generating standard pleadings, contracts, and client correspondence from versioned templates.
- Client portal with case status, document downloads, and direct messaging to the assigned attorney.
- Audit trail of who viewed, edited, or downloaded what — visible to firm administrators.
The spreadsheet workflow is retired. The firm has not asked to go back.
What I'd do differently next time
- Filament admin from day one. I built the internal admin screens by hand. Filament would have given the firm a self-service surface for managing users, roles, and lookup tables without me. Cheap leverage I left on the table.
- Horizon for the queue. The database queue driver is fine at this scale, but I would not bet on it scaling with the firm. Redis plus Horizon gives me visibility and back-pressure I currently squint at logs to find.
- Tenant-ready data model on day one. The schema is single-tenant by design. If a sister firm ever wants the same platform, I would re-derive the case_members model with a tenant_id seam up front rather than retrofit it.
- An export endpoint, properly. The end-of-month export story still feels like a feature waiting for a real specification.
What I take away
The hardest parts of this build were not the parts I could anticipate from a feature list. They were the access model, the template authoring problem, and the small infrastructure choices that decide whether a platform stays operable by one person. Those problems are worth more on this build than any framework choice was.