PDL Philosophy
Human-Led AI
People bring the wisdom, judgment, context, and connection to the consequences. AI amplifies what people provide.
Starting Point
PDL's mission is to help turn strategy and technology into tools that serve people, not the other way around. AI provides the most value when the human is actively driving: they understand the problem and the context, and they have to live with the consequences. AI is a tool in the toolbox. It isn't the solution, but it can make the solution better. Hence, PDL always advises starting with AI as a thought partner, a sounding board. When used correctly, AI can help generate ideas, rapidly prototype, identify and pressure-test assumptions, and ask questions you haven't thought of yet. The focus isn't efficiency. It's a higher quality of thinking and output. That's what amplification means.
Three Principles — PDL's Approach to AI in Business
🧠
Judgment First
AI doesn't replace the business owner's judgment. It extends it. The best AI outputs come from people who already understand the problem they're asking about. A team that hasn't developed that judgment yet will get mediocre outputs regardless of which tool they use. That's why we start with thought partnership, not automation.
📈
Emergent Use Cases, Not Mandated Ones
Mandating use cases before a team has used AI organically produces compliance, not capability. The most durable AI adoption happens when people discover what's useful for their actual work. Those discoveries get documented, refined, and shared. That's the approach here: give the team access, give them time, and capture what emerges.
🔒
Governance Before Scale
AI governance isn't a compliance exercise. It's the infrastructure that makes scaling AI safe. Before Crestline expands into automation, voice agents, or customer-facing AI, the team needs shared guidelines for what AI should and shouldn't be used for, how to handle customer data, and what outputs require human review. Governance built during the pilot is governance that fits the business. A generic policy bolted on after the fact rarely holds.
Why This Fits Crestline
The Right Approach for This Business
Size, structure, and starting point all point the same direction
Crestline's AI Position
AI supports the existing team doing their jobs better. It doesn't replace roles or add headcount. Crestline has 14 people, no dedicated technology function, and an owner who is close to operations. That profile is well-suited for AI as a thought partner and poorly suited for complex AI infrastructure. The right starting point is frontier model access for the whole organization, not a targeted automation project.
Strength
Owner Engagement
The Owner is close to operations and involved in day-to-day decisions. That's an asset for AI adoption — the person with the most context is in a position to use AI the most effectively and model its use for the rest of the team.
Strength
Google Workspace Already in Use
The team is already working in a cloud-based environment with Gmail and Google Docs. Gemini is included in most Google Workspace subscriptions, making it the lowest-barrier entry point and easing adoption of tools that integrate with or complement that stack, including NotebookLM for training reinforcement.
Watch
No Dedicated Tech Function
There is no IT or technology owner at Crestline. AI deployment and ongoing management need to be simple enough for the GSM or Lead Coordinator to maintain. Any tool that requires technical administration is the wrong tool for Phase 1.
Watch
Operational Constraints Come First
Crestline's SPDR score is 52.2. Strategy and process gaps are larger constraints than AI capability gaps. AI won't fix a broken recon process or a missing strategy — it will amplify whatever system it operates within. The business foundation has to be built in parallel.
Systems Fit
Where AI Sits in the SPDR System
AI touches all four systems — but it doesn't fix them
S
Strategy
Thought Partner
AI helps the whole team think and communicate more effectively. The Owner uses it to pressure-test strategic decisions. The GSM drafts vendor SLAs and process documentation. Sales staff write better listings and follow-up messages. The Lead Coordinator handles review responses and scheduling. Same judgment, better output at every level.
P
Processes
Phase 1 Use
Lead response drafts, follow-up templates, listing copy, review responses. AI accelerates repetitive communication tasks without changing the underlying process. Process improvement through AI is a Phase 2 consideration once the pilot reveals where the real friction is.
D
Data
Phase 2
Once IDMS reporting is live, AI can help interpret patterns: days-to-sale trends, buying filter logic, performance anomalies. Not useful until the data foundation exists. In Phase 2, AI can also help ensure Crestline's data infrastructure is built with the future in mind, not just reported on today.
R
Resources
Amplification
AI extends what the existing team can do without adding headcount. Same people, better equipped. The goal isn't doing more of the same things faster. It's producing higher quality thinking and output across the organization.
Phase 1
Foundation — Access, Training, and Organic Pilot
Weeks 6–16 · Concurrent with Business Strategy Phase 1
AI Phase 1 launches concurrently with Business Strategy Roadmap Phase 1, specifically aligned to Initiative 6 (AI Thought Partner Pilot). It does not wait for operational initiatives to complete but runs alongside them. Phase 2 AI work is explicitly triggered by Phase 1 data, not by the calendar.
Tool Selection
A Collaborative Decision — Not a Top-Down One
The team that uses AI daily should have input into which tool they use. The Owner sets the direction, but the GSM, Lead Coordinator, and sales staff are the ones building habits during the pilot. Bring them into the evaluation. Three strong options exist for Crestline, each with a different tradeoff. A fourth option — aggregator platforms like Poe — offers flexibility by bundling multiple models in one interface, but adds complexity during onboarding that may slow adoption for a team at this stage. Switching tools mid-pilot isn't catastrophic. Prompting habits and use cases transfer reasonably well across platforms. That said, consistency during a 60–90 day window reduces distraction and builds cleaner comparison data.
ChatGPT Team
Broad name recognition. Strong general-purpose capability. Team plan includes data privacy controls with conversations not used for training. Familiar interface for most users. Plugin ecosystem available if needed in Phase 2.
Claude (Anthropic)
Strong reasoning and long-document capability. Excellent for drafting, analysis, and thought partnership. Team plan available. Less name recognition but equal or superior capability for Crestline's Phase 1 use cases. Integrates well with Google Workspace workflows.
Gemini (Google)
Lowest barrier to entry. Already included in most Google Workspace subscriptions, so Crestline may have access without additional cost. Strong breadth of capability and deep Workspace integration. May be a step behind Claude and ChatGPT on depth of thought partnership, but a strong starting point for a team already in the Google ecosystem.
Training & Onboarding
Whole-Org Access + Basic Training
All 14 staff get access on day one. Training is basic: how the tool works, what it's good at, what it's not, and the governance guardrails. The goal is to remove the barrier to entry, not to prescribe how people use it. The Owner sponsors the pilot. The Lead Coordinator manages day-to-day questions and use case documentation. NotebookLM serves as the living training reinforcement hub, housing the governance doc, prompt library, best practices, and use case library in one searchable place the team can return to.
📋
Deliverable
AI Governance Framework
Acceptable use guidelines, data handling rules (what customer data can and can't go into AI prompts), output review requirements, and escalation protocol. Written for a 14-person team. One page, plain language, not a compliance document.
📝
Deliverable
Starter Prompt Library
20–30 reusable prompts built around Crestline's actual workflows: lead response drafts, listing descriptions, review responses, follow-up templates, buying analysis, and owner-level strategic thinking prompts. Stored in NotebookLM. Updated as the pilot produces better versions.
📖
Deliverable
Best Practices Guide
How to write a good prompt, how to iterate on AI output, what to review before sending, and what not to delegate to AI. Role-specific guidance for sales staff, the GSM, and the Owner. Stored in NotebookLM alongside the governance doc.
🗂️
Deliverable — End of Pilot
Use Case Library
Documented from the 60–90 day organic pilot. What the team actually used AI for, which use cases produced the most value, and which are candidates for systematization in Phase 2. This library drives the Phase 2 prioritization. Not assumptions made before the pilot ran.
🔬
Deliverable — End of Pilot
NotebookLM Training Hub
A Google NotebookLM instance housing all four deliverables above: governance, prompt library, best practices, and use case library. The team can ask questions of the documents directly, find prompts by workflow, and onboard new staff without a dedicated training session. Living document, updated as the business evolves.
Phase 2
Amplification — Triggered, Not Assumed
Phase 2 AI work doesn't start on a schedule — it starts when the data says it's ready
Why Triggers Matter
Phase 2 AI tools — the buying filter and the Lindy voice agent — are only useful if the underlying data and processes exist to support them. The buying filter needs 60+ days of days-to-sale data by vehicle type and acquisition source. The voice agent needs a functioning lead capture process to hand off to. Deploying either before those conditions are met produces a tool that can't do its job. The trigger conditions below are not milestones to hit for their own sake. They're the minimum foundation each Phase 2 tool requires to work.
AI Buying Filter
Trigger Conditions
✓  60+ days of IDMS days-to-sale data by acquisition source and vehicle type
✓  True GPU (gross profit per unit) with holding cost tracked for the same period
✓  Recon SLA established and producing consistent cycle time data
✓  Owner has reviewed data and can articulate which inventory segments perform best
Lindy Voice Agent
Trigger Conditions
✓  IDMS CRM fully activated with lead capture and pipeline tracking live
✓  Lead response protocol established and followed consistently
✓  Post-sale follow-up workflows running for 60+ days
✓  Lead Coordinator role in place to manage agent handoffs
After the Pilot
What Comes Next Depends on What the Pilot Reveals
PDL doesn't assume Crestline is committed to expanding AI. The pilot is designed to find out.
PDL's Commitment
Recommending AI tools for the sake of recommending AI tools is exactly what PDL won't do. The pilot exists to answer a real question: does AI make Crestline's team more effective? If the answer is yes, the follow-up question is where specifically, and that's what determines what comes next. If the answer is no, or not yet, that's equally valuable information. The goal is honest assessment, not a predetermined path to Phase 2.
What Phase 1 Is Designed to Answer
The Questions the Pilot Resolves
60–90 days of organic use produces answers that assumptions never could
Adoption
Is the team actually using it?
Voluntary adoption across a 14-person team is the first signal. If only one or two people are using AI after 90 days, that tells you something important about readiness, the tool choice, or both. Adoption breadth matters before use case depth.
Value
Where is it producing real impact?
The use case library documents this. Not where AI could theoretically help, but where it actually did. The highest-value use cases from the pilot become the candidates for systematization. Everything else stays optional.
Fit
Is this the right tool for this team?
Tool fit matters. A platform that one person loves and twelve people avoid isn't a successful deployment. The pilot surfaces whether the chosen tool matches how Crestline's team actually works, and whether a different option would serve them better.
If the Pilot Is Successful
What a Successful Pilot Unlocks
These are possibilities, not commitments
AI Buying Filter
Data-anchored acquisition decisions
If IDMS reporting is live and producing 60+ days of days-to-sale and GPU data, the buying filter becomes viable. A one-page decision rule that scores acquisition opportunities by projected turn velocity and gross profit per unit. The Owner's judgment stays at the center. AI gives it better data to work with.
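To make the idea of a one-page decision rule concrete, it could be sketched in a few lines of code. Everything below is a hypothetical illustration: the thresholds, daily holding cost, and function names are assumptions, not Crestline's actual numbers. The real rule would be calibrated from 60+ days of IDMS days-to-sale and GPU data once the trigger conditions are met, and it informs the Owner's judgment rather than replacing it.

```python
# Hypothetical buying-filter sketch. All constants are illustrative
# assumptions to be replaced with values derived from IDMS reporting.

HOLDING_COST_PER_DAY = 28.0   # assumed daily holding cost per unit ($)
TARGET_DAYS_TO_SALE = 45      # assumed target turn velocity (days)
MIN_NET_GPU = 1200.0          # assumed minimum acceptable net gross profit ($)

def score_acquisition(projected_days_to_sale: float, projected_gpu: float) -> dict:
    """Score an acquisition opportunity by projected turn velocity and
    net gross profit per unit (GPU minus projected holding cost)."""
    holding_cost = projected_days_to_sale * HOLDING_COST_PER_DAY
    net_gpu = projected_gpu - holding_cost
    # Faster-turning units score higher: ratio of target turn to projected turn.
    velocity_score = TARGET_DAYS_TO_SALE / max(projected_days_to_sale, 1)
    if net_gpu >= MIN_NET_GPU and projected_days_to_sale <= TARGET_DAYS_TO_SALE:
        decision = "buy"
    elif net_gpu < 0:
        decision = "pass"
    else:
        decision = "review"   # ambiguous cases go to the Owner's judgment
    return {"net_gpu": round(net_gpu, 2),
            "velocity_score": round(velocity_score, 2),
            "decision": decision}
```

A unit projected to sell in 30 days at $2,500 GPU nets $1,660 after holding cost and scores as a buy; a unit projected at 90 days and $2,000 GPU nets a loss and scores as a pass; anything in between is flagged for review. The point of keeping the rule this small is that the Owner can read it, argue with it, and adjust it.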
Lindy Voice Agent
Automated lead response and follow-up
If the IDMS CRM is fully active, lead protocols are established, and the Lead Coordinator is in place, a voice agent becomes a force multiplier rather than a liability. Handles inbound response, appointment confirmation, and post-sale touchpoints. Not deployed until the infrastructure it hands off to actually exists.
If the Pilot Struggles
What a Struggling Pilot Tells You
Negative signal is still signal
Honest Assessment
Low adoption after 90 days isn't a failure of the team. It's information. It might mean the tool wasn't the right fit. It might mean the timing was wrong — the operational constraints (recon, IDMS, strategy) were too loud for AI to compete for attention. It might mean the training and onboarding needed more structure. PDL reviews the pilot outcomes with the Owner honestly. If the conditions aren't right to expand, we say so. Pushing AI forward when the foundation isn't ready doesn't serve Crestline. It serves a predetermined agenda.
The Question PDL Asks
At the End of Phase 1
"Did AI make your team more effective? If yes, where specifically? That's where we go next."
The answer to that question determines everything that follows. Not a predetermined roadmap. Not a commitment made before the pilot ran. What the pilot reveals is what drives the next decision.
All data in this assessment is synthetic — developed from real-world patterns and industry research to illustrate how PDL engagements work. All real client data is private and confidential.