Operational memory · Knowledge infrastructure

Map the work that holds your library together.

LibraryOps is an advisory practice for library leadership. We surface the tacit workflows, decisions, and process knowledge that already run your institution — and prepare them, responsibly, for what comes next.

Field · Interview-led discovery before implementation
Memory · Workflow knowledge that survives transitions
Human · Judgment before any model-driven handoff

Institutional memory is walking out the door, one retirement at a time.

Every library runs on workflows that nobody wrote down — passed shoulder-to-shoulder, refined by people who’ve been in the building for twenty years. When leadership turns to AI, modernization, or strategic planning, that hidden infrastructure is what gets in the way. Or, more often, what quietly disappears.

PROBLEM 01

Decisions live in inboxes and in the heads of retiring staff.

The reasoning behind a vendor switch, a discovery layer customization, or a service-point policy is rarely captured anywhere a successor can find it.

PROBLEM 02

Workflows are tacit, taught by demonstration.

The cataloger’s exception-handling. The ILL escalation path. The reference team’s informal triage. None of it shows up in a procedure manual; all of it shapes patron experience.

PROBLEM 03

AI initiatives stall before they start.

You can’t responsibly automate what you can’t describe. Vendors arrive with confident demos; your team can’t answer questions about how the work actually happens, because the answers were never written.

PROBLEM 04

Documentation culture is uneven across departments.

Some teams have wikis going back fifteen years. Others have a binder. Most have neither. Strategy demands a shared baseline before it demands a roadmap.

Qualitative research is the discovery method. The library is the curriculum.

Five phases, in order. We listen first, write second, and only ever recommend tooling after the work has been mapped. Human judgment is preserved at every step; the deliverables outlast the engagement.

PHASE 01

Listen

Semi-structured interviews with staff at every level — circulation, cataloging, reference, IT, leadership. We collect the words people actually use to describe their work.

2–3 weeks · 18–30 conversations
PHASE 02

Map

We diagram workflows, decision points, and hand-offs — not as they’re documented, but as they actually run. Annotated, dated, attributed.

Workflow atlas · decision logs
PHASE 03

Synthesize

Scattered process knowledge becomes a shared documentation layer your team can edit, contest, and extend. The map belongs to the institution, not to us.

Knowledge library · review cycles
PHASE 04

Equip

With the work mapped, we assess where AI and automation responsibly fit — and, just as often, where they should not. Every recommendation cites the workflow it touches.

AI readiness review
PHASE 05

Sustain

We leave behind a documentation culture — rituals, owners, review cadences — so the operational memory keeps growing after we’re gone.

Playbooks · quarterly check-ins

Engagements scoped to your operational reality, not a fixed product.

Most institutions start with a memory audit, then choose what to deepen. Each service can stand alone, or stack into a multi-quarter engagement.

SERVICE 01

Operational memory audit

A focused intake of the institution’s undocumented work. We map where knowledge lives, where it’s thinning, and where a single retirement would create real exposure.

  • Stakeholder interviews
  • Knowledge-risk register
  • Executive readout
SERVICE 02

Workflow mapping engagement

Department-by-department workflow atlases drawn from observation and interview. The map your successors wish you had left them.

  • Annotated process diagrams
  • Decision log capture
  • Exception handling traces
SERVICE 03

Documentation culture program

The hard part isn’t writing the documentation. It’s the rituals that keep it alive. We design the ownership, cadence, and review practices that hold.

  • Owner & cadence design
  • Templates & rubrics
  • Onboarding integration
SERVICE 04

Responsible AI readiness review

Before any model touches your operations, we assess what’s ready, what isn’t, and where automation would damage trust. Every finding cites the workflow it concerns.

  • Capability mapping
  • Boundary recommendations
  • Vendor evaluation rubric
SERVICE 05

Leadership advisory retainer

A standing relationship for directors and deans navigating modernization. Quarterly working sessions, on-call strategic review, and an ongoing thinking partner.

  • Quarterly working sessions
  • Decision-stage reviews
  • Confidential sounding board
SERVICE 06

Succession & transition support

When senior staff retire, restructure, or move on, we run a focused capture engagement so their working knowledge does not leave with them.

  • Departing-staff interviews
  • Knowledge transfer plans
  • Successor briefing kits

Artifacts your team owns, edits, and keeps growing.

Every engagement produces concrete deliverables — not slide decks. They’re structured to be edited by the people who do the work, not just the people who commissioned the project.

D.01

Workflow atlas

Annotated diagrams of the actual process — including the exceptions, escalations, and hand-offs that the formal procedure misses.

Markdown + SVG
D.02

Decision log

A dated record of operational decisions and the reasoning behind them. Backfilled from interviews; maintained going forward.

Structured log
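
The document doesn't specify the log schema, so as a sketch only, one dated entry in Markdown might look like this (the date, decision, and field names are illustrative, not the practice's actual template):

```markdown
## 2023-04-12 · Discovery layer: suppressed vendor renewal notices

- **Decision:** Disable the vendor's automated renewal emails; route
  notices through a locally maintained template instead.
- **Reasoning:** The vendor wording confused patrons who held multiple
  items with staggered due dates.
- **Decided by:** Head of Access Services, with the Systems Librarian.
- **Source:** Backfilled from a staff interview; maintained going forward.
- **Revisit when:** The vendor ships configurable notice templates.
```

Because each entry carries its own date, owner, and reasoning, a successor can search the log instead of reconstructing decisions from old inboxes.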
D.03

Process knowledge library

A searchable repository of the tacit knowledge that previously lived in inboxes and tenured staff memory.

Wiki-ready
D.04

AI readiness assessment

A workflow-by-workflow review of where automation responsibly fits, where it doesn’t, and what would need to change first.

Report + matrix
D.05

Documentation playbook

The rituals, templates, and ownership patterns that keep the institutional memory alive after the engagement ends.

Operating guide
D.06

Knowledge-risk register

A standing inventory of where institutional knowledge is concentrated in too few people, with mitigation plans.

Living register
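
To illustrate what a row in a living register can carry, here is a minimal Markdown-table sketch; the column names and entries are hypothetical, not the practice's actual format:

```markdown
| Knowledge area              | Held by         | Exposure | Mitigation                   | Next review |
|-----------------------------|-----------------|----------|------------------------------|-------------|
| Serials check-in exceptions | 1 staff member  | High     | Capture interviews scheduled | Quarterly   |
| ILL escalation path         | 2 staff members | Medium   | Workflow atlas in progress   | Quarterly   |
```

The point of the register is the review cadence: each row names where knowledge is concentrated, how exposed the institution is, and when the mitigation is next checked.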
FIG.02 — deliverables map. Sources: interviews (18–30), on-site observation, existing documents. Synthesis: workflows, decisions, risks. Artifacts: workflow atlas, decision log, knowledge library. Outcome: future-ready knowledge infrastructure.

AI systems are the implementation layer, not the headline.

We work with AI infrastructure daily. We are deeply unromantic about it. The principles below are non-negotiable across every engagement.

P.01

Map before you model.

No automation recommendation without a documented workflow underneath it. The map is the prerequisite, not the deliverable.

P.02

Automate analysis, not care.

Patron experience, judgment calls, and the relational work of librarianship stay human. We say no to vendors who pitch otherwise.

P.03

Preserve human judgment at every checkpoint.

Models can summarize, surface, and suggest. They do not decide. Every workflow we equip has a named human who signs off.

P.04

Document the work first.

If a workflow can’t be described to a junior staff member, it cannot be responsibly automated. Documentation is the gate, not the bonus.

P.05

Vendor claims are evidence, not conclusions.

We bring the rubric. Demos are evaluated against your real workflows, not a vendor’s sample data.

P.06

Audit trails are infrastructure.

Every automated step needs to be reviewable, reversible, and attributable. If it isn’t, it isn’t ready.

The line

Automate analysis and support. Do not automate away care, trust, human judgment, or the human-centered patron experience.

We work alongside the whole institution, not just the top of the org chart.

Different audiences need different conversations. Pick the one that fits where you sit.

“Before we modernize, we need to know what we actually do.”

Library leadership carries the strategic question alone. We become the practice that helps you answer it — with evidence drawn from your own staff and your own patrons, not from a sector report.

Our advisory retainers are designed for the cadence of academic, public, and special-collection leadership: quarterly working sessions, on-call review at decision moments, and confidential sounding-board access in between.

What you leave with

  • A defensible operational picture
  • An AI readiness review you can take to the board
  • A documentation practice that survives the next leadership transition

“The vendor wants requirements. We need workflows first.”

IT and systems collaborators are often handed a modernization mandate without the operational map underneath it. We close that gap — producing the workflow documentation that lets you specify, evaluate, and integrate without guesswork.

We don’t replace your team. We sit beside it, doing the qualitative discovery work that systems implementations rarely budget for, and that quietly determines whether the project succeeds.

What you leave with

  • Workflow specifications grounded in observed practice
  • Vendor evaluation rubrics tied to real exceptions
  • A shared vocabulary with library leadership

“Finally, someone is writing down what we actually do.”

The most productive interviews we run are with the people who’ve been doing the work for fifteen years and have never been asked to describe it. We come to listen. The map belongs to the institution, but the words in it are yours.

Our process is non-extractive. Staff review every diagram before it’s finalized; nothing is published or shared without explicit consent.

What you leave with

  • A documented version of work that’s been invisible
  • A successor who won’t start from zero
  • A seat at the AI readiness conversation

“Show me how this institution will hold its knowledge through the next decade.”

Trustees, board chairs, and funders increasingly ask whether the libraries they support are positioned for the next ten years. We produce the evidence base that lets leadership answer that question without resorting to vendor talking points.

For grant cycles and strategic-plan updates, our deliverables read as standalone artifacts — honest about what works, what doesn’t, and what the institution still needs to decide.

What you leave with

  • An evidence-backed strategic posture
  • A clear, ethical AI position
  • Documentation suitable for grant and accreditation cycles

07 / Start a conversation

The first call is a conversation, not a pitch.

Tell us what’s pressing. We’ll tell you whether an engagement makes sense, whether it’s urgent, and whether you actually need us — or whether a single working session is enough.

PRACTICE DETAILS

Engagement length · 6–14 weeks
Cohort size · 3 institutions / quarter
Interview scope · 18–30 staff conversations
Deliverable format · Markdown, SVG, structured logs
Sectors served · Academic, public, special
Next intake · Q3 / 2026