Use this as your last-mile PMI-CPMAI™ review. Pair it with the Syllabus for coverage and Practice for speed.
The exam’s “decision loop”
Most scenario questions reduce to a repeatable sequence:
- Clarify the business objective and success criteria
- Identify the tightest constraint (privacy/security, regulation, time, data access, risk tolerance)
- Choose the lowest-risk next step that still moves delivery forward
- Make it auditable (document decisions, assumptions, and evidence)
If an answer skips governance, evidence, or stakeholder alignment when the scenario implies it, it’s often wrong.
Responsible & trustworthy AI (Domain I) — minimum viable guardrails
Privacy & security checklist
- Data access is least-privilege and logged
- Encryption at rest and in transit
- PII handling rules + retention policy
- Privacy impact assessment when required
- Clear incident response and escalation path
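To make the first two items concrete, here is a minimal Python sketch of least-privilege field access with a logged audit trail. The roles, field names, and print-as-log-sink are illustrative assumptions, not any particular framework's API.

```python
import json, time

# Illustrative role-to-field allowlist; a real system would back this
# with an IAM policy rather than an in-memory dict.
ACCESS_POLICY = {
    "claims_analyst": {"claim_id", "claim_amount"},
    "fraud_reviewer": {"claim_id", "claim_amount", "customer_ssn"},
}

def read_fields(role: str, record: dict, fields: list[str]) -> dict:
    """Return only the fields this role may see, and log the access."""
    allowed = ACCESS_POLICY.get(role, set())
    denied = [f for f in fields if f not in allowed]
    if denied:
        raise PermissionError(f"{role} may not read {denied}")
    # print() stands in for an append-only access log (who, what, when).
    print(json.dumps({"ts": time.time(), "role": role, "fields": fields}))
    return {f: record[f] for f in fields}

record = {"claim_id": "C-17", "claim_amount": 1200.0, "customer_ssn": "xxx-xx-1234"}
print(read_fields("claims_analyst", record, ["claim_id", "claim_amount"]))
```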
Transparency & auditability checklist
- Document data selection and preprocessing steps
- Document algorithm/model selection rationale
- Explainability expectations set per stakeholder needs
- Audit trail exists for key decisions and changes
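A minimal sketch of an append-only decision log that would satisfy the audit-trail item; the record fields and JSON Lines format are illustrative choices, not a prescribed standard.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable entry: what was decided, why, and on what evidence."""
    decision: str
    rationale: str
    evidence: list[str]   # IDs or links to supporting artifacts
    decided_by: str
    timestamp: str = ""

def log_decision(path: str, rec: DecisionRecord) -> None:
    rec.timestamp = datetime.now(timezone.utc).isoformat()
    with open(path, "a") as f:  # append-only JSON Lines audit trail
        f.write(json.dumps(asdict(rec)) + "\n")

log_decision("decision_log.jsonl", DecisionRecord(
    decision="Use gradient boosting over deep model",
    rationale="Meets accuracy target with better explainability",
    evidence=["eval_report_v3.pdf", "ticket-482"],
    decided_by="ml-lead",
))
```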
Bias checks checklist
- Representation issues identified in training data
- Fairness testing across relevant groups
- Bias monitoring plan (not just one-time testing)
- Mitigation approach selected (data, thresholds, constraints, review)
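A toy fairness test comparing selection rates across groups. The data is fabricated for illustration, and the 0.8 "four-fifths" ratio is a common heuristic, not a CPMAI-mandated threshold.

```python
# Compare positive-prediction (selection) rates across groups.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups      = ["A", "A", "A", "B", "B", "B", "A", "B", "B", "A"]

def selection_rates(preds, grps):
    rates = {}
    for g in set(grps):
        idx = [i for i, x in enumerate(grps) if x == g]
        rates[g] = sum(preds[i] for i in idx) / len(idx)
    return rates

rates = selection_rates(predictions, groups)
ratio = min(rates.values()) / max(rates.values())  # disparate-impact ratio
print(rates, f"ratio={ratio:.2f}", "FLAG" if ratio < 0.8 else "ok")
# A one-time pass is not enough: rerun this on live traffic per the
# monitoring-plan item above.
```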
Business needs & solutions (Domain II) — framing shortcuts
“Problem statement” template
For (user/persona), who (need/pain), the goal is (measurable outcome), within (constraints), so that (business value).
Example: For claims intake analysts, who spend hours manually triaging submissions, the goal is to cut triage time by 50% within existing privacy constraints, so that high-risk claims reach adjusters faster.
Feasibility screen (fast)
- Data exists, is accessible, and is fit-for-purpose
- Stakeholders agree on success metrics
- Operational integration is feasible (latency, workflow, ownership)
- Risks are understood and mitigations exist (security, safety, ethics)
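One way to keep the screen fast and honest is to encode it as an explicit gate; the check names below are illustrative, and in practice each answer would link to evidence.

```python
# Feasibility screen as a go/no-go gate over the four checks above.
def feasibility_screen(answers: dict[str, bool]) -> tuple[bool, list[str]]:
    required = [
        "data_accessible_and_fit",
        "success_metrics_agreed",
        "integration_feasible",
        "risks_mitigated",
    ]
    gaps = [k for k in required if not answers.get(k, False)]
    return (not gaps, gaps)

go, gaps = feasibility_screen({
    "data_accessible_and_fit": True,
    "success_metrics_agreed": True,
    "integration_feasible": False,  # e.g., latency budget not yet confirmed
    "risks_mitigated": True,
})
print("GO" if go else f"NO-GO, open items: {gaps}")
```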
Scope and success criteria
- In-scope vs out-of-scope explicitly stated
- KPIs include both business outcomes and model performance
- Acceptance criteria include reliability/monitoring, not only accuracy
Data needs (Domain III) — data readiness checklist
- Required data types, volume, time window, and granularity defined
- Data SMEs identified and engaged (business + technical)
- Sources and ownership mapped; access approved
- Privacy/compliance constraints documented
- Data evaluated for completeness, quality, and representativeness
- Findings communicated clearly to leadership (limits + options)
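A small sketch of the evaluation item: completeness per field and coverage per segment. The rows are fabricated, and acceptance thresholds would be set with business SMEs.

```python
# Data-readiness spot checks: field completeness and segment coverage.
rows = [
    {"id": 1, "amount": 120.0, "region": "north"},
    {"id": 2, "amount": None,  "region": "north"},
    {"id": 3, "amount": 95.5,  "region": "south"},
    {"id": 4, "amount": 40.0,  "region": None},
]

def completeness(rows, field):
    """Share of rows where the field is populated."""
    return sum(r[field] is not None for r in rows) / len(rows)

def segment_coverage(rows, field):
    """Row counts per segment value, to spot thin or missing segments."""
    counts = {}
    for r in rows:
        counts[r[field]] = counts.get(r[field], 0) + 1
    return counts

print({f: completeness(rows, f) for f in ("amount", "region")})
print(segment_coverage(rows, "region"))
```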
Model development & evaluation (Domain IV) — go/no-go gates
Technique selection trade-offs
- Higher accuracy often trades off against interpretability, cost, and risk
- Choose the simplest approach that meets requirements
- Make constraints explicit (latency, explainability, auditability)
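A sketch of "simplest approach that meets requirements" expressed as a selection rule; the candidate names, scores, and complexity ranks are invented for illustration.

```python
# Pick the simplest candidate that clears the requirement bar.
candidates = [
    {"name": "rules_baseline",    "auc": 0.71, "complexity": 1, "explainable": True},
    {"name": "logistic_reg",      "auc": 0.78, "complexity": 2, "explainable": True},
    {"name": "gradient_boosting", "auc": 0.83, "complexity": 3, "explainable": True},
    {"name": "deep_ensemble",     "auc": 0.84, "complexity": 5, "explainable": False},
]
REQUIRED_AUC, REQUIRE_EXPLAINABLE = 0.80, True  # explicit constraints

viable = [c for c in candidates
          if c["auc"] >= REQUIRED_AUC
          and (c["explainable"] or not REQUIRE_EXPLAINABLE)]
choice = min(viable, key=lambda c: c["complexity"])  # simplest that qualifies
print(choice["name"])  # -> gradient_boosting, despite deep_ensemble's higher AUC
```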
QA/QC and configuration management
- Version models, data, and parameters
- Testing protocols defined (functional + performance)
- Peer review and validation performed before release
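A minimal sketch of the versioning item: fingerprint the exact parameters and data manifest behind each model version so any result can be traced and reproduced. The hashes stand in for a real model registry.

```python
import hashlib, json

def fingerprint(obj) -> str:
    """Stable short hash of a JSON-serializable config or manifest."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()[:12]

params = {"max_depth": 6, "learning_rate": 0.1, "seed": 42}
data_manifest = {"source": "claims_2024q1.parquet", "rows": 48210}

registry_entry = {
    "model_version": "v1.3.0",
    "params_hash": fingerprint(params),
    "data_hash": fingerprint(data_manifest),
}
print(registry_entry)  # stored alongside the model artifact for traceability
```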
Go/no-go decisions
- Data quality meets acceptance criteria
- Model performance meets thresholds and holds up across segments
- Failure modes and mitigations are understood
- Operational monitoring and rollback exist
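The gate itself can be code: block release unless performance clears the bar overall *and* in every segment. The thresholds and results below are illustrative.

```python
# Go/no-go gate over overall and per-segment performance.
OVERALL_MIN, SEGMENT_MIN = 0.80, 0.75

results = {"overall": 0.83, "segments": {"north": 0.81, "south": 0.72}}

failures = []
if results["overall"] < OVERALL_MIN:
    failures.append("overall below threshold")
failures += [f"segment {s} below threshold"
             for s, v in results["segments"].items() if v < SEGMENT_MIN]

print("GO" if not failures else f"NO-GO: {failures}")  # -> NO-GO (south fails)
```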
Operationalize AI (Domain V) — release and runbook
Deployment plan must include
- Integration steps + owners
- Validation criteria and rollout strategy
- Rollback and contingency plans
- Monitoring dashboards and alerting thresholds
- Governance plan for updates/retraining
Monitoring “triad”
- Data: drift, missingness, schema changes
- Model: performance proxies, output distribution shifts
- Business: KPI impact, user feedback, error cost
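For the data leg of the triad, one widely used drift check is the Population Stability Index (PSI). A self-contained sketch follows; the bin shares and the 0.2 alert threshold are conventional placeholder values, not CPMAI-mandated ones.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """PSI between a baseline and a current distribution over shared bins."""
    eps = 1e-6  # guard against log(0) on empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

train_bins = [0.10, 0.20, 0.40, 0.20, 0.10]  # feature's share per bin at training
live_bins  = [0.05, 0.15, 0.35, 0.25, 0.20]  # same bins, current traffic

score = psi(train_bins, live_bins)
print(f"PSI={score:.3f}", "ALERT: drift" if score > 0.2 else "ok")
```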
Fast “best answer” eliminators
- Implementing changes without required approvals/governance → usually wrong.
- “Train a better model” when the problem is unclear or the data is weak → usually wrong.
- Ignoring privacy/security/compliance signals in the stem → usually wrong.
- No monitoring/rollback plan for production deployment → usually wrong.