Operational memory · Knowledge infrastructure
Map the work that holds your library together.
LibraryOps is an advisory practice for library leadership. We surface the tacit workflows, decisions, and process knowledge that already run your institution — and prepare them, responsibly, for what comes next.
Institutional memory is walking out the door, one retirement at a time.
Every library runs on workflows that nobody wrote down — passed shoulder-to-shoulder, refined by people who’ve been in the building for twenty years. When leadership turns to AI, modernization, or strategic planning, that hidden infrastructure is what gets in the way. Or, more often, what quietly disappears.
Decisions live in inboxes and in the heads of retiring staff.
The reasoning behind a vendor switch, a discovery layer customization, or a service-point policy is rarely captured anywhere a successor can find it.
Workflows are tacit, taught by demonstration.
The cataloger’s exception-handling. The ILL escalation path. The reference team’s informal triage. None of it shows up in a procedure manual; all of it shapes patron experience.
AI initiatives stall before they start.
You can’t responsibly automate what you can’t describe. Vendors arrive with confident demos; your team can’t answer questions about how the work actually happens, because the answers were never written.
Documentation culture is uneven across departments.
Some teams have wikis going back fifteen years. Others have a binder. Most have neither. Strategy demands a shared baseline before it demands a roadmap.
Qualitative research is the discovery method. The library is the curriculum.
Five phases, in order. We listen first, write second, and only ever recommend tooling after the work has been mapped. Human judgment is preserved at every step; the deliverables outlast the engagement.
Listen
Semi-structured interviews with staff at every level — circulation, cataloging, reference, IT, leadership. We collect the words people actually use to describe their work.
2–3 weeks · 18–30 conversations
Map
We diagram workflows, decision points, and hand-offs — not as they’re documented, but as they actually run. Annotated, dated, attributed.
Workflow atlas · decision logs
Synthesize
Scattered process knowledge becomes a shared documentation layer your team can edit, contest, and extend. The map belongs to the institution, not to us.
Knowledge library · review cycles
Equip
With the work mapped, we assess where AI and automation responsibly fit — and, just as often, where they should not. Every recommendation cites the workflow it touches.
AI readiness review
Sustain
We leave behind a documentation culture — rituals, owners, review cadences — so the operational memory keeps growing after we’re gone.
Playbooks · quarterly check-ins
Engagements scoped to your operational reality, not a fixed product.
Most institutions start with a memory audit, then choose what to deepen. Each service can stand alone, or stack into a multi-quarter engagement.
Operational memory audit
A focused intake of the institution’s undocumented work. We map where knowledge lives, where it’s thinning, and where a single retirement would create real exposure.
- Stakeholder interviews
- Knowledge-risk register
- Executive readout
Workflow mapping engagement
Department-by-department workflow atlases drawn from observation and interview. The map your successors wish you had left them.
- Annotated process diagrams
- Decision log capture
- Exception handling traces
Documentation culture program
The hard part isn’t writing the documentation. It’s the rituals that keep it alive. We design the ownership, cadence, and review practices that hold.
- Owner & cadence design
- Templates & rubrics
- Onboarding integration
Responsible AI readiness review
Before any model touches your operations, we assess what’s ready, what isn’t, and where automation would damage trust. Every finding cites the workflow it concerns.
- Capability mapping
- Boundary recommendations
- Vendor evaluation rubric
Leadership advisory retainer
A standing relationship for directors and deans navigating modernization. Quarterly working sessions, on-call strategic review, and an ongoing thinking partner.
- Quarterly working sessions
- Decision-stage reviews
- Confidential sounding board
Succession & transition support
When senior staff retire, restructure, or move on, we run a focused capture engagement so their working knowledge does not leave with them.
- Departing-staff interviews
- Knowledge transfer plans
- Successor briefing kits
Artifacts your team owns, edits, and keeps growing.
Every engagement produces concrete deliverables — not slide decks. They’re structured to be edited by the people who do the work, not just the people who commissioned the project.
Workflow atlas
Annotated diagrams of the actual process — including the exceptions, escalations, and hand-offs that the formal procedure misses.
Decision log
A dated record of operational decisions and the reasoning behind them. Backfilled from interviews; maintained going forward.
Process knowledge library
A searchable repository of the tacit knowledge that previously lived in inboxes and tenured staff memory.
AI readiness assessment
A workflow-by-workflow review of where automation responsibly fits, where it doesn’t, and what would need to change first.
Documentation playbook
The rituals, templates, and ownership patterns that keep the institutional memory alive after the engagement ends.
Knowledge-risk register
A standing inventory of where institutional knowledge is concentrated in too few people, with mitigation plans.
AI systems are the implementation layer, not the headline.
We work with AI infrastructure daily. We are deeply unromantic about it. The principles below are non-negotiable across every engagement.
Map before you model.
No automation recommendation without a documented workflow underneath it. The map is the prerequisite, not the deliverable.
Automate analysis, not care.
Patron experience, judgment calls, and the relational work of librarianship stay human. We say no to vendors who pitch otherwise.
Preserve human judgment at every checkpoint.
Models can summarize, surface, and suggest. They do not decide. Every workflow we equip has a named human who signs off.
Document the work first.
If a workflow can’t be described to a junior staff member, it cannot be responsibly automated. Documentation is the gate, not the bonus.
Vendor claims are evidence, not conclusions.
We bring the rubric. Demos are evaluated against your real workflows, not a vendor’s sample data.
Audit trails are infrastructure.
Every automated step needs to be reviewable, reversible, and attributable. If it isn’t, it isn’t ready.
Automate analysis and support. Never automate away care, trust, or human judgment in the patron experience.
We work alongside the whole institution, not just the org chart’s top.
Different audiences need different conversations. Pick the one that fits where you sit.
“Before we modernize, we need to know what we actually do.”
Library leadership carries the strategic question alone. We become the practice that helps you answer it — with evidence drawn from your own staff and your own patrons, not from a sector report.
Our advisory retainers are designed for the cadence of academic, public, and special-collection leadership: quarterly working sessions, on-call review at decision moments, and confidential sounding-board access in between.
What you leave with
- A defensible operational picture
- An AI readiness review you can take to the board
- A documentation practice that survives the next leadership transition
“The vendor wants requirements. We need workflows first.”
IT and systems collaborators are often handed a modernization mandate without the operational map underneath it. We close that gap — producing the workflow documentation that lets you specify, evaluate, and integrate without guesswork.
We don’t replace your team. We sit beside it, doing the qualitative discovery work that systems implementations rarely budget for, and that quietly determines whether the project succeeds.
What you leave with
- Workflow specifications grounded in observed practice
- Vendor evaluation rubrics tied to real exceptions
- A shared vocabulary with library leadership
“Finally, someone is writing down what we actually do.”
The most productive interviews we run are with the people who’ve been doing the work for fifteen years and have never been asked to describe it. We come to listen. The map belongs to the institution, but the words in it are yours.
Our process is non-extractive. Staff review every diagram before it’s finalized; nothing is published or shared without explicit consent.
What you leave with
- A documented version of work that’s been invisible
- A successor who won’t start from zero
- A seat at the AI readiness conversation
“Show me how this institution will hold its knowledge through the next decade.”
Trustees, board chairs, and funders increasingly ask whether the libraries they support are positioned for the next ten years. We produce the evidence base that lets leadership answer that question without resorting to vendor talking points.
For grant cycles and strategic-plan updates, our deliverables read as standalone artifacts — honest about what works, what doesn’t, and what the institution still needs to decide.
What you leave with
- An evidence-backed strategic posture
- A clear, ethical AI position
- Documentation suitable for grant and accreditation cycles
Start a conversation
The first call is a conversation, not a pitch.
Tell us what’s pressing. We’ll tell you whether an engagement makes sense, whether it’s urgent, and whether you actually need us — or whether a single working session is enough.