Exam at a glance
- Exam name: AWS Certified Machine Learning Engineer — Associate (MLA-C01)
- Level: Associate
- Questions: 65 total (multiple choice, multiple response, ordering, matching, and case study question types)
- Time: 130 minutes
- Delivery: Pearson VUE testing center or online proctored exam
- Result: Scaled score (100–1000); minimum passing score: 720
- Cost: 150 USD
- Languages offered: English, Japanese, Korean, Simplified Chinese
Note: AWS includes scored and unscored questions; the exam guide indicates 50 scored + 15 unscored (total 65).
Domain breakdown (weights)
- Domain 1: Data Preparation for Machine Learning (ML) — 28%
- Domain 2: ML Model Development — 26%
- Domain 3: Deployment and Orchestration of ML Workflows — 22%
- Domain 4: ML Solution Monitoring, Maintenance, and Security — 24%
What the exam emphasizes (high level)
Expect scenario-driven questions that ask you to choose the best approach for:
- Ingesting, transforming, validating, and preparing data for modeling
- Selecting modeling approaches, training and tuning models, and analyzing performance
- Choosing deployment endpoints and infrastructure, plus orchestration and CI/CD for ML workflows
- Monitoring model inference and infrastructure, managing costs, and securing ML systems
The exam leans heavily on SageMaker (Feature Store, Data Wrangler, Model Registry, monitoring and deployment patterns), with a strong MLOps and operational-reliability flavor.
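As a concrete taste of that SageMaker flavor, here is a minimal, hypothetical sketch of registering a trained model version in the SageMaker Model Registry with boto3; the model package group name, container image, and S3 path are placeholders, not values from any exam scenario.

```python
# Hypothetical sketch: registering a model version in the SageMaker Model
# Registry so it can be approved and promoted through an MLOps pipeline.
# The group name, image URI, and S3 path below are placeholders.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

response = sm.create_model_package(
    ModelPackageGroupName="churn-models",           # assumed existing group
    ModelPackageDescription="XGBoost churn model",
    ModelApprovalStatus="PendingManualApproval",    # gate deployment on approval
    InferenceSpecification={
        "Containers": [{
            "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest",
            "ModelDataUrl": "s3://example-bucket/models/churn/model.tar.gz",
        }],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
)
print(response["ModelPackageArn"])
```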
Who should take MLA-C01
This exam is a strong fit for:
- ML engineers and MLOps engineers working with Amazon SageMaker
- Data engineers and backend engineers who deploy ML features into production systems
- DevOps engineers supporting ML pipelines and model delivery
AWS’s guidance for the intended candidate: at least 1 year of experience using Amazon SageMaker and other AWS services for ML engineering, plus experience in a related role (for example, backend developer, DevOps engineer, data engineer, MLOps engineer, or data scientist).
Study plan (efficient)
- Pick a timeline: a 30-, 60-, or 90-day study plan.
- Work the Syllabus task-by-task; drill immediately after each task.
- Keep a miss log: convert misses into one-liner rules (“RAG isn’t the answer here; this is model monitoring”, “Choose serverless vs. real-time endpoints based on latency and traffic shape”); the sketch after this list illustrates that second rule.
- Final 1–2 weeks: mixed question sets plus at least a couple of timed runs; review every miss.
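For the serverless-versus-real-time rule above, here is a minimal boto3 sketch of the two endpoint configurations, assuming the SageMaker model already exists; the model name, memory size, and instance type are placeholder choices, not recommendations.

```python
# Hypothetical sketch: the same model behind a serverless vs. a real-time
# endpoint config. All names and sizes below are placeholders.
import boto3

sm = boto3.client("sagemaker")

# Serverless: scales to zero and bills per request; a fit for spiky or low
# traffic that can tolerate cold-start latency.
sm.create_endpoint_config(
    EndpointConfigName="churn-serverless-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "churn-model",  # assumes this model already exists
        "ServerlessConfig": {"MemorySizeInMB": 2048, "MaxConcurrency": 10},
    }],
)

# Real-time: always-on instances; a fit for steady traffic with tight
# latency requirements.
sm.create_endpoint_config(
    EndpointConfigName="churn-realtime-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "churn-model",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

# Either config can back the endpoint; swap the config name to switch modes.
sm.create_endpoint(
    EndpointName="churn-endpoint",
    EndpointConfigName="churn-serverless-config",
)
```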