LIA - Oral Presentation Guide
Presentation Overview
Your oral presentation is worth 15% of your LIA grade. You have 15 minutes to present your project, followed by 5 minutes of questions from the instructor. This is your opportunity to demonstrate not only that your system works, but that you understand every decision you made.
Time Allocation
Stick to this time breakdown. Practicing with a timer is essential.
| # | Section | Duration | Slides | Key Content |
|---|---|---|---|---|
| 1 | Title & Introduction | 1 min | 1-2 | Project name, problem statement, your name |
| 2 | Problem & Dataset | 1 min | 1 | Business context, dataset stats, objectives |
| 3 | Model Training & Results | 3 min | 2-3 | EDA highlights, models compared, metrics table, best model justification |
| 4 | API Architecture | 2 min | 1-2 | Endpoints, validation, error handling, framework choice |
| 5 | Live Demo | 3 min | 0 (live) | Health check, valid prediction, invalid input, model-info |
| 6 | Testing | 1.5 min | 1 | Test summary, coverage %, Postman highlights |
| 7 | Explainability | 1.5 min | 1-2 | SHAP/LIME visualizations, key insights |
| 8 | Conclusion & Lessons | 2 min | 1-2 | Summary, lessons learned, what you'd improve |
| | Total Presentation | 15 min | 10-14 | |
| 9 | Q&A | 5 min | 0 | Answer instructor questions |
Going over 18 minutes or under 12 minutes will cost you points. Practice with a stopwatch. If you're running long, cut from the model training details — the demo and explainability are more impressive.
Slide-by-Slide Guide
Slide 1: Title Slide
| Element | Content |
|---|---|
| Title | Your project name (e.g., "Customer Churn Prediction API") |
| Subtitle | LIA — AI Model Deployment |
| Your name | First and last name |
| Date | Presentation date |
| Course | Course code and name |
Start strong. Your title slide should be clean with no clutter. State your project name and jump straight into the problem. Don't spend time on "Thank you for being here" or "Today I will present..." — the audience knows why you're there.
Slide 2: Problem & Context (1 min)
Content to include:
- What problem does your model solve?
- Who benefits from this prediction?
- Why does this matter? (business value)
- Dataset: name, size, source (1-2 bullet points)
Example:
"Telecom companies lose millions annually to customer churn. This project predicts which customers are likely to leave, enabling proactive retention campaigns. I used the Telco Customer Churn dataset — 7,043 customers with 20 features."
Slides 3-4: Model Training (3 min)
Content to include:
- 1-2 key EDA insights (show a chart, not a wall of text)
- Models compared (table with metrics)
- Best model and why you chose it
- One interesting finding from training
| Model | Accuracy | F1-Score | AUC-ROC |
|---|---|---|---|
| Logistic Regression | 0.80 | 0.56 | 0.84 |
| Random Forest | 0.79 | 0.54 | 0.83 |
| XGBoost | 0.81 | 0.63 | 0.87 |
"I selected XGBoost because it has the best AUC-ROC (0.87) and F1-Score (0.63), which matters more than accuracy for this imbalanced dataset."
Slide 5: API Architecture (2 min)
Content to include:
- Framework choice and justification
- Endpoint table (keep it simple)
- One example request/response
- How validation and error handling work
Slides 6-7: Live Demo (3 min)
This is the most impactful part of your presentation. Show your API working in real-time.
Demo sequence (recommended order):
| # | Action | What to Show | Expected Result |
|---|---|---|---|
| 1 | GET /health | API is running | { "status": "healthy" } |
| 2 | GET /model-info | Model metadata | Model name, version, metrics |
| 3 | POST /predict (valid) | Real prediction | Prediction + confidence |
| 4 | POST /predict (invalid) | Error handling | 422 with validation error message |
| 5 | GET /docs | Swagger UI | Interactive API documentation |
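One way to rehearse this sequence is a small script run against your locally started API. A standard-library-only sketch; it assumes the API listens on 127.0.0.1:8000, and the payload fields are placeholders for your own schema:

```python
# Rehearsal script for the demo sequence above. Assumes the API is
# already running locally; adjust BASE_URL if your port differs.
import json
import urllib.error
import urllib.request

BASE_URL = "http://127.0.0.1:8000"

def call(method, path, payload=None):
    """Send one request and return (status_code, parsed JSON body)."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(
        BASE_URL + path, data=data, method=method,
        headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status, json.loads(resp.read())
    except urllib.error.HTTPError as err:
        # Validation failures (e.g. the 422 demo step) land here.
        return err.code, json.loads(err.read())

if __name__ == "__main__":
    print(call("GET", "/health"))
    print(call("GET", "/model-info"))
    # Payload fields are illustrative; use your own schema.
    print(call("POST", "/predict", {"tenure": 12, "monthly_charges": 70.5}))
    print(call("POST", "/predict", {"tenure": -1}))  # expect a 422
```

Running this once before you walk into the room confirms every demo step in under a second.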
- Pre-start your API before the presentation begins. Don't waste 30 seconds running `uvicorn`.
- Use Swagger UI (`/docs`) for the demo — it's visual and impressive.
- Prepare your requests in advance (bookmarked browser tabs or pre-filled Swagger forms).
- Have a backup plan: if the demo fails, show screenshots or a pre-recorded video.
Test your demo on the presentation computer before presenting. Common failures:
- Port already in use (change port)
- Missing dependencies (use `requirements.txt`)
- Model file not found (check relative paths)
- Firewall blocking connections
Slide 8: Testing (1.5 min)
Content to include:
- Number of tests and test types
- Coverage percentage
- One Postman screenshot (optional)
- One interesting edge case you discovered
Example:
"I wrote 14 automated tests — 5 unit tests, 6 integration tests, and 3 edge case tests. Code coverage is 82%. I discovered that sending negative values for `tenure` caused an unhandled error, which led me to add input range validation."
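An edge case like this translates directly into a regression test. A sketch with a hypothetical `validate_tenure` helper (in a real project this check more likely lives in your Pydantic schema):

```python
# Sketch of the edge-case test described above: negative tenure must
# be rejected. validate_tenure is a hypothetical helper, not a library call.
import pytest

def validate_tenure(tenure: int) -> int:
    """Reject out-of-range tenure values before they reach the model."""
    if not 0 <= tenure <= 100:
        raise ValueError(f"tenure must be between 0 and 100, got {tenure}")
    return tenure

def test_valid_tenure_passes():
    assert validate_tenure(12) == 12

def test_negative_tenure_rejected():
    with pytest.raises(ValueError):
        validate_tenure(-5)
```

Being able to point at the test that pins down the bug you found is a much stronger story than just quoting a coverage number.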
Slides 9-10: Explainability (1.5 min)
Content to include:
- Method used (LIME, SHAP, or both)
- One global feature importance chart (SHAP summary plot is ideal)
- One individual prediction explanation
- Key insight in plain language
Example narrative:
"Using SHAP, I found that the three most important features are: contract type, monthly charges, and tenure. Month-to-month customers are 3x more likely to churn. Here's a waterfall plot for a specific high-risk customer — you can see that their short tenure and high monthly charges push the prediction strongly toward churn."
Slides 11-12: Conclusion (2 min)
Content to include:
- Summary of achievements (3-4 bullet points)
- 2-3 lessons learned (be honest and reflective)
- 2-3 future improvements
- Final word
Example conclusion:
"In summary, I built a complete AI prediction service — from data exploration to a deployed, tested, explainable API. Key lessons: data quality matters more than model complexity, and writing tests early saves time. If I had more time, I'd add Docker deployment and monitoring with Prometheus."
Handling Q&A (5 minutes)
The Q&A tests whether you truly understand your project. The instructor will ask 3-5 questions.
Common Questions to Prepare For
Model Training Questions
| Question | What They're Testing |
|---|---|
| Why did you choose [model] over [other model]? | Can you justify decisions with metrics and reasoning? |
| How did you handle class imbalance? | Do you know techniques like SMOTE, class weights? |
| What would happen if you used more or fewer features? | Do you understand feature selection? |
| Is your model overfitting? How do you know? | Can you interpret train vs test performance? |
| Why did you use [metric] as your primary metric? | Do you understand metric trade-offs? |
| What is cross-validation and did you use it? | Do you understand evaluation methodology? |
| How would you retrain the model with new data? | Do you think about the production lifecycle? |
API Design Questions
| Question | What They're Testing |
|---|---|
| Why FastAPI instead of Flask (or vice versa)? | Can you compare frameworks? |
| How and when is the model loaded? | Do you understand startup vs. per-request loading? |
| What happens if the model file is missing? | Do you handle edge cases? |
| How does Pydantic validation work? | Do you understand your validation layer? |
| What HTTP status code for [scenario]? | Do you know REST conventions? |
| How would you handle concurrent requests? | Do you understand concurrency? |
| How would you add authentication? | Do you think about security? |
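The model-loading question above is worth rehearsing with code in mind. A framework-agnostic sketch of startup loading (all names are hypothetical; in FastAPI the `load()` call would sit in a startup or lifespan handler):

```python
# Illustration of startup vs. per-request model loading. Loading once
# at startup avoids paying the disk/deserialization cost on every call.
import time

class ModelService:
    def __init__(self, loader):
        self._loader = loader
        self._model = None

    def load(self):
        """Call once at application startup (e.g. a FastAPI lifespan event)."""
        self._model = self._loader()

    def predict(self, features):
        if self._model is None:  # fail fast if startup was skipped
            raise RuntimeError("model not loaded; call load() at startup")
        return self._model(features)

def fake_loader():
    time.sleep(0.01)  # stands in for joblib.load of a real model file
    return lambda feats: {"churn_probability": 0.42}

service = ModelService(fake_loader)
service.load()  # one-time cost, paid before the first request arrives
```

This also gives you a ready answer for the missing-model-file question: the failure happens once, loudly, at startup instead of on a user's request.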
Testing Questions
| Question | What They're Testing |
|---|---|
| What is the difference between unit and integration tests? | Do you understand test types? |
| What does 70% code coverage mean? | Do you understand coverage metrics? |
| Is 100% coverage always the goal? | Do you understand testing limitations? |
| What is a test fixture in pytest? | Do you understand your testing framework? |
| How did you test error handling? | Did you test the unhappy paths? |
| What did Postman tests verify that pytest didn't? | Do you understand complementary testing? |
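The unit-versus-integration distinction (and the fixture question) is easiest to answer with a tiny example in your head. A sketch with hypothetical helpers; the fixture supplies shared test input the pytest way:

```python
# Sketch: a unit test exercises one function in isolation; an
# integration test exercises components working together.
# preprocess and predict are hypothetical stand-ins for real code.
import pytest

def preprocess(raw):
    """Unit under test: coerce raw string inputs to floats."""
    return {k: float(v) for k, v in raw.items()}

def predict(features):
    """Second component: a trivially simple stand-in 'model'."""
    return sum(features.values()) > 100

@pytest.fixture
def sample_customer():
    """Fixture: reusable test input shared across tests."""
    return {"tenure": "12", "monthly_charges": "95.5"}

def test_preprocess_unit(sample_customer):
    assert preprocess(sample_customer)["tenure"] == 12.0

def test_pipeline_integration(sample_customer):
    # Runs preprocessing and prediction together.
    assert predict(preprocess(sample_customer)) is True
```

If asked, you can then say precisely which of your 14 tests are which kind, and what each fixture provides.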
Explainability Questions
| Question | What They're Testing |
|---|---|
| What is the difference between LIME and SHAP? | Do you understand both methods? |
| What does a SHAP value represent? | Can you explain the math simply? |
| Did you find any unexpected feature importance? | Did you think critically about results? |
| Could there be data leakage in your features? | Do you understand this critical concept? |
| Is your model fair? How would you check? | Do you think about bias and ethics? |
| What are the limitations of your explainability analysis? | Do you understand method limitations? |
Q&A Strategy
| Do | Don't |
|---|---|
| Listen to the full question before answering | Interrupt the questioner |
| Say "That's a great question" to buy thinking time | Say "I don't know" and stop there |
| Admit if you don't know, then hypothesize | Make up an answer you're not sure about |
| Relate answers back to your specific project | Give generic textbook answers |
| Be concise — 30-60 seconds per answer | Ramble for 3 minutes |
It's okay to not know everything. A strong response is: "I haven't explored that specifically, but based on what I know about [related concept], I would approach it by [hypothesis]." This shows critical thinking even when you lack the specific knowledge.
Presentation Design Tips
Slide Design Rules
| Rule | Good | Bad |
|---|---|---|
| Text per slide | 4-6 bullet points, key words | Full paragraphs copied from report |
| Font size | ≥ 24pt for body, ≥ 32pt for titles | 12pt text that nobody can read |
| Visualizations | Charts, diagrams, screenshots | Walls of text |
| Colors | Consistent palette, readable contrast | Rainbow colors, light text on light background |
| Code on slides | Short snippets (5-10 lines max) | Full source files |
| Animations | None or minimal | Flying text and spinning transitions |
Content Strategy
Think of your presentation as a story, not a report:
- Hook — Why should the audience care? (Introduction)
- Challenge — What was hard? (Training, data issues)
- Solution — How did you solve it? (API, testing)
- Proof — Does it work? (Demo, explainability)
- Reflection — What did you learn? (Conclusion)
What NOT To Do
Presentation Anti-Patterns
| Anti-Pattern | Why It's Bad | What To Do Instead |
|---|---|---|
| Reading from slides | Shows you don't know the material | Use slides as prompts, speak naturally |
| Showing all your code | Boring, impossible to read | Show 1-2 key snippets, demo the rest |
| Skipping the demo | The demo is 20% of the impression | Always demo, even if briefly |
| No eye contact | Seems unconfident | Look at the audience, not the screen |
| Apologizing ("Sorry, I know this isn't perfect") | Undermines your work | Present confidently, mention improvements in future work |
| Running over time | Disrespectful, loses points | Practice with a timer |
| Live coding during demo | Too risky, wastes time | Have everything running before you start |
| Unexplained jargon | Loses the audience | Define terms briefly if needed |
| No backup plan | Demo failures are devastating | Have screenshots or a video ready |
| Thanking everyone at the start | Wastes 30 seconds | Start with the problem statement |
Preparation Checklist
One Week Before
- Slide deck is complete (10-14 slides)
- All visualizations and charts are included
- Demo script is written (what you'll show, in what order)
- First practice run done (check timing)
Three Days Before
- Practice run #2 with timer (aim for 14-15 minutes)
- Q&A questions reviewed and answers prepared
- Demo tested on the presentation computer (or similar setup)
- Backup screenshots saved in slides (in case demo fails)
Day Before
- Final practice run (ideally in front of someone)
- API tested and working
- Slide deck exported to PDF (backup format)
- All files on USB drive or accessible cloud storage
Day Of
- Arrive early, set up equipment
- Start your API before the presentation begins
- Open Swagger UI in a browser tab
- Open your slide deck
- Take a deep breath — you've prepared well
Grading Rubric — Oral Presentation
| Criterion | Weight | Excellent (14-15) | Good (12-13) | Satisfactory (11) | Insufficient (< 11) |
|---|---|---|---|---|---|
| Content Completeness | 25% | All components covered: model, API, tests, explainability | Most components covered | Some components missing | Major components missing |
| Technical Depth | 20% | Deep understanding, justifies all decisions | Good understanding, most decisions explained | Surface-level understanding | Cannot explain decisions |
| Live Demo | 20% | Flawless demo, multiple scenarios shown | Demo works with minor hiccups | Demo partially works | Demo fails or not attempted |
| Communication | 15% | Confident, clear, good pace, engages audience | Good delivery, minor nervousness | Adequate but monotone or rushed | Unclear, reading from slides |
| Slides Quality | 10% | Clean, visual, informative, professional | Good design, minor issues | Acceptable but text-heavy | Poor design, hard to read |
| Q&A Performance | 10% | All questions answered confidently | Most questions answered well | Some questions answered | Cannot answer basic questions |
Example Presentation Outline
Here is a concrete example for a Customer Churn Prediction project:
| Slide | Title | Content |
|---|---|---|
| 1 | Customer Churn Prediction API | Title, name, date, course |
| 2 | The Problem | "26% of customers churn quarterly — $2M annual loss" |
| 3 | Dataset & EDA | Telco dataset, 7043 samples, class distribution chart |
| 4 | Model Comparison | Table: LogReg vs RF vs XGBoost, metrics highlighted |
| 5 | Best Model: XGBoost | AUC-ROC=0.87, confusion matrix, why XGBoost won |
| 6 | API Architecture | Diagram: Client → FastAPI → Model → Response |
| 7 | Endpoints & Validation | Table of endpoints, Pydantic example |
| 8-9 | LIVE DEMO | Swagger UI: /health → /predict → /model-info → error case |
| 10 | Testing Results | 14 tests, 82% coverage, key edge case found |
| 11 | SHAP Analysis | Summary plot: top 3 features explained |
| 12 | Individual Explanation | Waterfall plot for one high-risk customer |
| 13 | Conclusion & Lessons | Summary, 3 lessons learned, 2 future improvements |
| 14 | Thank You / Questions | Contact info, repository link |