Quiz — AI-Assisted Coding and Debugging

30 min · 20 questions

Section A: AI Coding Concepts (7 Questions)

Question 1

How do AI coding assistants like GitHub Copilot generate code suggestions?

  • A) They execute the code in a sandbox and select the output that passes tests
  • B) They use next-token prediction based on statistical patterns learned from training data
  • C) They search a database of code snippets and return the closest match
  • D) They compile the code and use the AST to generate optimized versions

Answer: B

AI coding assistants use transformer-based language models that predict the most probable next token given the preceding context. They are trained on millions of code repositories and learn statistical patterns. They do not execute code, search a snippet database, or work with compiled ASTs. This is why they can generate plausible-looking code that is functionally incorrect — they optimize for statistical likelihood, not correctness.


Question 2

What is the "context window" of an AI coding model?

  • A) The visible portion of code displayed on the developer's screen
  • B) The time window during which the model is actively processing a request
  • C) The maximum amount of text (in tokens) the model can process at once
  • D) The number of files the IDE sends to the AI service simultaneously

Answer: C

The context window is the maximum number of tokens (words, sub-words, or characters) that the model can "see" and process in a single request. For example, GPT-4o has a 128K token context window, and Claude 3.5 Sonnet has a 200K token window. Information outside this window is invisible to the model, which is why long codebases may not be fully understood.
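A rough rule of thumb (not a real tokenizer — actual BPE tokenization varies by model) is that English prose and code average about four characters per token, which lets you estimate whether a file fits:

```python
# Heuristic only: real tokenizers (BPE) vary by model and language.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fits_in_context(text: str, window: int = 128_000) -> bool:
    """True if the text likely fits in a model's context window."""
    return estimate_tokens(text) <= window

print(estimate_tokens("def add(a, b):\n    return a + b\n"))  # → 8
```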


Question 3

Which of the following is a hallucination in AI-generated code?

  • A) The model generates a function that is too slow for production use
  • B) The model suggests calling pandas.smart_merge(), a function that doesn't exist
  • C) The model writes code that doesn't follow PEP 8 style guidelines
  • D) The model generates code that is functionally correct but hard to read

Answer: B

A hallucination occurs when the AI generates content that is factually incorrect but looks plausible. Suggesting a non-existent function like pandas.smart_merge() is a classic hallucination — the model extrapolates from patterns in its training data and invents something that doesn't exist. Slow code (A), style violations (C), and poor readability (D) are real issues but not hallucinations.
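One cheap defense is checking that a suggested attribute actually exists before building on it. A minimal sketch using only the standard library (smart_loads below is a made-up stand-in for a hallucinated name):

```python
import importlib

def api_exists(module_name: str, attr: str) -> bool:
    """Return True only if module.attr really exists — a quick sanity
    check for AI-suggested calls before you trust them."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr)

print(api_exists("json", "loads"))        # → True, a real function
print(api_exists("json", "smart_loads"))  # → False, a hallucination
```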


Question 4

What is the primary reason AI coding tools sometimes generate insecure code?

  • A) The models are intentionally designed to ignore security
  • B) Security patterns are harder to learn from static code analysis
  • C) The training data contains both secure and insecure code, and the model optimizes for common patterns, not security
  • D) AI models don't have access to the OWASP Top 10 documentation

Answer: C

AI models learn from all the code in their training data — including the vast amount of insecure code found in tutorials, Stack Overflow answers, and older repositories. Since insecure patterns (like string concatenation for SQL) are common in training data, the model may reproduce them unless security is explicitly requested in the prompt. The models are not intentionally insecure (A), they can learn security patterns (B), and they have seen security documentation (D).


Question 5

When should you NOT use AI coding assistants?

  • A) Writing boilerplate CRUD endpoints
  • B) Generating unit tests from existing functions
  • C) Implementing cryptographic algorithms for production
  • D) Converting code from Python to JavaScript

Answer: C

Cryptographic implementations require mathematical precision and security expertise that AI models cannot guarantee. A subtle error in a crypto implementation (wrong padding, weak random number generation, incorrect key derivation) can make the entire system insecure. Boilerplate CRUD (A), test generation (B), and language conversion (D) are all excellent use cases for AI.
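The safe pattern is to delegate to vetted primitives rather than accept AI-written crypto. A sketch using only the standard library's secrets and hashlib:

```python
import hashlib
import secrets

# Never hand-roll crypto: use audited primitives instead.
token = secrets.token_urlsafe(32)  # cryptographically secure random token

# Password hashing with a random salt and a slow KDF (PBKDF2-HMAC-SHA256).
salt = secrets.token_bytes(16)
digest = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 600_000)
print(len(digest))  # → 32 bytes, the SHA-256 output size
```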


Question 6

What does the temperature parameter control in an LLM?

  • A) The speed at which the model generates tokens
  • B) The randomness/creativity of the output — low values are deterministic, high values are more varied
  • C) The maximum length of the generated output
  • D) The number of alternative suggestions the model provides

Answer: B

Temperature controls the randomness of token selection. At temperature 0, the model always picks the most probable next token (deterministic). At higher temperatures (0.7–1.0), the model samples from a wider distribution, producing more varied and creative output. For code generation, lower temperatures (0.0–0.3) are usually preferred because they produce more predictable and consistent code.
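Conceptually, temperature rescales the model's raw scores before sampling. A minimal sketch of the mechanism (an illustration, not any vendor's actual implementation):

```python
import math
import random

def sample_token(logits: list[float], temperature: float) -> int:
    """Pick a token index. Temperature 0 is greedy (deterministic);
    higher values flatten the distribution, adding randomness."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

print(sample_token([1.0, 4.0, 2.0], temperature=0))  # → 1, every time
```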


Question 7

What is the REVIEW framework for evaluating AI-generated code?

  • A) Run, Evaluate, Verify, Inspect, Execute, Watch
  • B) Readability, Edge cases, Vulnerabilities, Integration, Efficiency, Working
  • C) Refactor, Edit, Validate, Improve, Enhance, Write
  • D) Requirements, Endpoints, Validation, Implementation, Execution, Workflow

Answer: B

The REVIEW framework is a systematic checklist: Readability (is the code clean?), Edge cases (does it handle boundaries?), Vulnerabilities (are there security issues?), Integration (does it fit the codebase?), Efficiency (is performance acceptable?), Working (have you actually tested it?). This framework ensures thorough evaluation before accepting AI-generated code.


Section B: Prompt Engineering (6 Questions)

Question 8

Which prompting strategy involves providing examples of the desired input/output pattern before the actual request?

  • A) Zero-shot prompting
  • B) Chain-of-thought prompting
  • C) Few-shot prompting
  • D) Persona prompting

Answer: C

Few-shot prompting provides 1–3 examples of the desired pattern before asking the AI to generate new code following that pattern. This is particularly effective when you want the AI to match your project's specific coding style or conventions. Zero-shot (A) provides no examples, chain-of-thought (B) asks for step-by-step reasoning, and persona (D) sets the AI's role.
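A sketch of what a few-shot prompt can look like in practice (the examples and naming convention are invented for illustration):

```python
# Two examples teach the project's naming pattern; the final line is the
# actual request, which the model completes in the same style.
FEW_SHOT_PROMPT = """\
# Example 1
Request: fetch a user by id
Code: def get_user_by_id(user_id: int) -> User: ...

# Example 2
Request: fetch an order by id
Code: def get_order_by_id(order_id: int) -> Order: ...

# Your turn
Request: fetch an invoice by id
Code:"""
print(FEW_SHOT_PROMPT.count("Request:"))  # → 3
```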


Question 9

What are the five components of a well-structured code prompt?

  • A) Language, Framework, Version, Endpoint, Database
  • B) Role, Context, Task, Constraints, Format
  • C) Input, Processing, Output, Error handling, Testing
  • D) Description, Requirements, Acceptance criteria, Test cases, Documentation

Answer: B

A well-structured prompt includes: Role (who should the AI act as?), Context (what's the project background?), Task (what exactly should it do?), Constraints (what rules must it follow?), and Format (how should the output look?). This structure gives the AI maximum clarity about what you need.
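The five components can be assembled mechanically. A minimal sketch (the example values are hypothetical):

```python
def build_prompt(role: str, context: str, task: str,
                 constraints: str, output_format: str) -> str:
    """Assemble the five prompt components into one structured request."""
    return "\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Constraints: {constraints}",
        f"Format: {output_format}",
    ])

prompt = build_prompt(
    role="You are a senior Python backend developer.",
    context="A FastAPI service backed by PostgreSQL.",
    task="Write an endpoint that returns a user by id.",
    constraints="Use parameterized queries; include error handling.",
    output_format="One code block with type hints and a docstring.",
)
```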


Question 10

You need AI to implement a complex rate-limiting algorithm. Which prompting strategy is most appropriate?

  • A) Zero-shot — just ask directly
  • B) Few-shot — provide examples of rate limiters
  • C) Chain-of-thought — ask the AI to reason through approaches before implementing
  • D) Template — provide a code skeleton to fill in

Answer: C

For complex tasks like algorithm design, chain-of-thought (CoT) prompting produces the best results. By asking the AI to first list approaches, compare trade-offs, and explain its reasoning step by step, you get a more thoughtful and correct implementation. Zero-shot is too simple for complex tasks, few-shot works best for following patterns, and templates work for known structures.


Question 11

Which of the following is an example of a bad prompt for code generation?

  • A) "Write a Python function that validates an email address and returns True/False"
  • B) "Make a website"
  • C) "Refactor this function for better readability, keeping the same input/output behavior"
  • D) "Write pytest tests covering happy path, edge cases, and error cases for this function"

Answer: B

"Make a website" is far too vague — it specifies no language, framework, features, design, or constraints. The AI would have to make dozens of assumptions. All other options (A, C, D) are specific enough to generate useful code: they specify the language, the task, and the expected behavior.


Question 12

In a multi-turn conversation with an AI, what should you do when the conversation gets long?

  • A) Start a new conversation from scratch
  • B) Re-summarize the key decisions and constraints before your next request
  • C) Repeat the entire conversation history in each message
  • D) Reduce the temperature to compensate for context loss

Answer: B

In long conversations, the AI may "forget" earlier context due to context window limits or attention dilution. Re-summarizing key decisions and constraints (e.g., "To recap: we're building X with Y constraints...") keeps the AI focused. Starting fresh (A) loses all context. Repeating everything (C) wastes tokens. Temperature (D) doesn't affect memory.


Question 13

What is the Constraint Pattern in prompt engineering?

  • A) Limiting the AI's output to a specific number of lines
  • B) Explicitly stating what the AI should and should NOT do
  • C) Constraining the model to use only certain programming languages
  • D) Setting a maximum token limit for the response

Answer: B

The Constraint Pattern explicitly lists what the AI should do (DO) and what it should NOT do (DO NOT). For example: "DO use parameterized queries. DO NOT use string concatenation for SQL." This pattern is highly effective for security-sensitive code and for avoiding common AI mistakes. It's about the content requirements, not output length (A, D) or language restrictions (C).


Section C: Security (7 Questions)

Question 14

You receive AI-generated code with this line: query = f"SELECT * FROM users WHERE id = {user_id}". What vulnerability does this introduce?

  • A) Cross-Site Scripting (XSS)
  • B) SQL Injection
  • C) Path Traversal
  • D) Insecure Deserialization

Answer: B

This is a classic SQL Injection vulnerability. The f-string directly interpolates user_id into the SQL query without sanitization or parameterization. An attacker could supply user_id = "1 OR 1=1" to retrieve all users, or "1; DROP TABLE users; --" to destroy data. The fix is to use parameterized queries: db.execute("SELECT * FROM users WHERE id = :id", {"id": user_id}).
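The difference is easy to demonstrate with an in-memory SQLite database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

user_id = "1 OR 1=1"  # a classic injection payload

# Vulnerable: the payload becomes part of the SQL and matches every row.
leaked = conn.execute(f"SELECT * FROM users WHERE id = {user_id}").fetchall()
print(len(leaked))  # → 2: the attacker dumped the whole table

# Safe: a parameterized query treats the input as data, not SQL.
safe = conn.execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchall()
print(len(safe))    # → 0: the payload matches no real id
```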


Question 15

Why is using pickle.load() on untrusted data dangerous?

  • A) Pickle files are very large and can cause out-of-memory errors
  • B) Pickle deserialization can execute arbitrary Python code during loading
  • C) Pickle is slow and will cause performance issues
  • D) Pickle files are not compatible across different Python versions

Answer: B

Python's pickle module can execute arbitrary code during deserialization. A malicious pickle file can contain instructions to run system commands, open reverse shells, or steal data — all triggered automatically when you call pickle.load(). This is why you should never load pickle files from untrusted sources, and should use safer alternatives like JSON, joblib with integrity checks, or ONNX for ML models.
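The mechanism is easy to demonstrate: pickle lets an object name a callable to run at load time via __reduce__ (here a harmless print, but it could just as easily be os.system):

```python
import pickle

class Malicious:
    def __reduce__(self):
        # Whatever callable we return here runs during pickle.loads().
        return (print, ("arbitrary code executed during unpickling!",))

payload = pickle.dumps(Malicious())
result = pickle.loads(payload)  # prints the message — no method call needed
```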


Question 16

What is a "dependency confusion" or "supply chain" attack in the context of AI-generated code?

  • A) When the AI generates code that imports too many unnecessary libraries
  • B) When the AI suggests a package name that doesn't exist, and an attacker registers it with malicious code
  • C) When two packages have conflicting version requirements
  • D) When the AI confuses the syntax of two similar programming languages

Answer: B

Dependency confusion attacks exploit the fact that AI models may suggest non-existent package names that sound plausible. Attackers monitor these AI hallucinations and register the suggested package names on PyPI or npm with malicious code. When developers blindly run pip install, they install the malicious package. This is why you should always verify that suggested packages exist and are legitimate before installing.


Question 17

What is the first thing you should check when an AI suggests installing a new package?

  • A) Whether the package is available in the latest Python version
  • B) Whether the package exists on the official package registry (PyPI/npm) and has significant download numbers
  • C) Whether the package has a beautiful README on GitHub
  • D) Whether the package is written in pure Python

Answer: B

The first verification step is confirming the package actually exists on the official registry (PyPI for Python, npm for JavaScript) and has a credible number of downloads. Fake or malicious packages typically have very low download counts, recent creation dates, and no established community. A nice README (C), Python compatibility (A), and pure Python (D) are secondary considerations.


Question 18

Which of the following should NEVER be pasted into an AI chat tool?

  • A) A generic sorting algorithm implementation
  • B) Production database credentials and API keys
  • C) A public code snippet from Stack Overflow
  • D) An open-source library's documentation

Answer: B

You should never paste production credentials, API keys, passwords, or any secrets into AI chat tools. This data is sent to external servers and may be stored in logs, used for training (on free plans), or exposed through data breaches. Always sanitize your code by replacing real secrets with placeholders before sending to AI. Generic algorithms (A), public snippets (C), and open-source docs (D) are safe to share.
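A minimal sketch of sanitizing a snippet before pasting it (the regexes are illustrative, not an exhaustive secret scanner):

```python
import re

SECRET_PATTERNS = [
    # key = "..." / password = '...' assignments (illustrative patterns only)
    (re.compile(r'((?:api[_-]?key|password|secret|token)\s*=\s*)["\'][^"\']+["\']',
                re.IGNORECASE),
     r'\1"<REDACTED>"'),
    # credentials embedded in connection URLs: scheme://user:pass@host
    (re.compile(r'(://)[^@\s/]+@'), r'\1<REDACTED>@'),
]

def sanitize(code: str) -> str:
    """Replace likely secrets with placeholders before sharing code."""
    for pattern, replacement in SECRET_PATTERNS:
        code = pattern.sub(replacement, code)
    return code

snippet = 'API_KEY = "sk-live-abc123"\nurl = "postgres://app:hunter2@db:5432/prod"'
print(sanitize(snippet))
```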


Question 19

Which tool is specifically designed for finding security vulnerabilities in Python code?

  • A) pytest
  • B) Black
  • C) Bandit
  • D) mypy

Answer: C

Bandit is a security-focused static analysis tool designed specifically for Python. It checks for common security issues like SQL injection, use of eval(), hardcoded passwords, insecure hash functions, and more. pytest (A) is a testing framework, Black (B) is a code formatter, and mypy (D) is a type checker — none of them focus on security.


Question 20

Your AI assistant generates a file download endpoint: return FileResponse(f"/uploads/{filename}"). What security improvement is needed?

  • A) Add authentication to the endpoint
  • B) Validate that the resolved path stays within the uploads directory (path traversal prevention)
  • C) Compress the file before sending
  • D) Add a rate limiter to prevent abuse

Answer: B

This code is vulnerable to path traversal attacks. An attacker could request filename = "../../etc/passwd" to access any file on the system. The fix is to resolve the path and verify it stays within the allowed directory:

from pathlib import Path
from fastapi import HTTPException

UPLOAD_DIR = Path("/uploads").resolve()
safe_path = (UPLOAD_DIR / filename).resolve()

if not safe_path.is_relative_to(UPLOAD_DIR):
    raise HTTPException(status_code=400, detail="Invalid file path")

While authentication (A) and rate limiting (D) are good practices, the critical fix is preventing path traversal. Compression (C) is irrelevant to security.


Score Card

Section                  Questions               Your Score
A: AI Coding Concepts    7 questions (Q1–Q7)     ___ / 7
B: Prompt Engineering    6 questions (Q8–Q13)    ___ / 6
C: Security              7 questions (Q14–Q20)   ___ / 7
Total                    20 questions            ___ / 20

Interpretation

Score    Level                Recommendation
18–20    Excellent            You have a strong grasp of AI-assisted coding concepts
14–17    Good                 Review the topics where you lost points
10–13    Needs improvement    Re-read the module materials and practice with the lab
< 10     Insufficient         Schedule a review session with your instructor