Q1. Who introduced the PAC Learning model?
A) Geoffrey Hinton
B) Leslie Valiant
C) Yann LeCun
D) Andrew Ng
Answer: Leslie Valiant
Q2. In PAC learning, what does “Probably” refer to?
A) Accuracy of hypothesis
B) Training time
C) Confidence (1−δ)
D) Error rate
Answer: Confidence (1−δ)
Q3. In PAC learning, the “Approximately” refers to:
A) Training samples
B) Model complexity
C) Error bound (ε)
D) Model runtime
Answer: Error bound (ε)
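Note (Q2–Q3): the two parameters combine into the formal PAC guarantee. A learner must output a hypothesis h such that, with probability at least 1 − δ over the random training sample, the error of h under the data distribution D is at most ε:

    Pr[ error_D(h) ≤ ε ] ≥ 1 − δ

Here ε is the “approximately” part (accuracy) and 1 − δ is the “probably” part (confidence).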
Q4. The instance space in PAC learning is commonly defined as:
A) Real numbers
B) Text data
C) Binary vectors of length n
D) Images
Answer: Binary vectors of length n
Q5. What is the range of a concept function c defined on the instance space X?
A) {−1, 1}
B) {0, 1}
C) {0, ∞}
D) ℝ
Answer: {0, 1}
Q6. What kind of distribution are training examples drawn from in PAC learning?
A) Gaussian distribution
B) Unknown but fixed distribution
C) Uniform distribution
D) Zipf distribution
Answer: Unknown but fixed distribution
Q7. What is the meaning of a hypothesis being “PAC correct”?
A) Exact match with the concept
B) Zero error
C) Approximately correct with high probability
D) Derived through a neural net
Answer: Approximately correct with high probability
Q8. What is the full form of PAC in PAC learning?
A) Precise and Clear
B) Probably Accurate Concept
C) Probably Approximately Correct
D) Practically Accurate Classification
Answer: Probably Approximately Correct
Q9. Which of the following classes is known to be PAC-learnable?
A) General DNF
B) Boolean Circuits
C) Monotone DNF
D) RSA Functions
Answer: Monotone DNF
Q10. The VC dimension quantifies:
A) Runtime of algorithm
B) Number of training epochs
C) Complexity of hypothesis class
D) Dimensionality of features
Answer: Complexity of hypothesis class
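Illustration (Q10): as a hedged sketch not taken from the quiz itself, the Python snippet below tests whether closed intervals [a, b] on the real line can shatter a given point set. Intervals shatter any two distinct points but no set of three, so their VC dimension is 2; the helper names (interval_realizes, shattered) are purely illustrative.

    from itertools import product

    def interval_realizes(points, labels):
        # True if some closed interval [a, b] produces exactly this 0/1 labeling.
        positives = [x for x, y in zip(points, labels) if y == 1]
        if not positives:
            return True                      # the empty interval labels everything 0
        lo, hi = min(positives), max(positives)
        # The tightest candidate is [lo, hi]; it fails only if a negative falls inside.
        return all(not (lo <= x <= hi) for x, y in zip(points, labels) if y == 0)

    def shattered(points):
        # A set is shattered if every one of the 2^n labelings is realizable.
        return all(interval_realizes(points, labels)
                   for labels in product([0, 1], repeat=len(points)))

    print(shattered([1.0, 2.0]))       # True: two points can be shattered
    print(shattered([1.0, 2.0, 3.0]))  # False: the labeling (1, 0, 1) is not realizable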
Q11. What is the output of a PAC learning algorithm?
A) A neural network
B) A set of training data
C) A hypothesis from the hypothesis class
D) A random classifier
Answer: A hypothesis from the hypothesis class
Q12. The EXAMPLES routine provides:
A) Noisy examples
B) Human-labeled inputs
C) Random labeled examples
D) Negative-only examples
Answer: Random labeled examples
Q13. One-sided error means:
A) The model is always wrong
B) Errors occur on only one class of examples
C) High recall
D) Zero error
Answer: Errors occur on only one class of examples
Q14. Sample complexity in PAC learning is mainly influenced by:
A) Number of layers in model
B) VC dimension
C) Number of CPUs
D) Type of activation function
Answer: VC dimension
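Note (Q14): a commonly cited bound makes this dependence explicit. For a hypothesis class of VC dimension d, on the order of

    m = O( (1/ε) · ( d·ln(1/ε) + ln(1/δ) ) )

examples suffice (constants vary between textbooks), so the required sample size grows with the VC dimension and shrinks as larger ε and δ are tolerated.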
Q15. What is the key assumption in standard PAC learning?
A) Noise in labels
B) Non-linear data
C) i.i.d. sampled examples
D) Deep learning model
Answer: i.i.d. sampled examples
Q16. If the VC-dimension is 10, ε = 0.05, and δ = 0.01, then the sample complexity is approximately:
A) 10
B) 100
C) 740
D) 1000
Answer: 740
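As a rough, hedged check of Q16 (not part of the quiz source), plugging d = 10, ε = 0.05, δ = 0.01 into the bound noted after Q14 gives a figure in the several hundreds, which is closest to option C; the exact number depends on the constants used.

    import math

    d, eps, delta = 10, 0.05, 0.01
    # m ≈ (1/eps) * (d * ln(1/eps) + ln(1/delta)); constants vary by source.
    m = (1 / eps) * (d * math.log(1 / eps) + math.log(1 / delta))
    print(round(m))  # 691 -- several hundred samples, nearest to option C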
Q17. What does the ORACLE routine allow a learning algorithm to do?
A) Predict labels
B) Query whether an instance is positive
C) Delete hypotheses
D) Return VC dimension
Answer: Query whether an instance is positive
Q18. Which class requires both EXAMPLES and ORACLE to be PAC-learned?
A) Boolean circuits
B) Monotone DNF
C) k-CNF
D) General DNF
Answer: Monotone DNF
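To make the roles of the two routines in Q12, Q17, and Q18 concrete, here is a minimal Python sketch of Valiant-style monotone DNF learning. The hidden concept, the number of variables, and the sample budget are made-up illustrations; EXAMPLES supplies random labeled instances and ORACLE answers membership queries, matching the descriptions in the quiz.

    import random

    n = 6

    def concept(x):
        # Hidden monotone DNF, chosen only for illustration: x0 x1  OR  x3 x4 x5.
        return (x[0] and x[1]) or (x[3] and x[4] and x[5])

    def EXAMPLES():
        # Random labeled instance drawn from a fixed distribution (uniform here).
        x = tuple(random.randint(0, 1) for _ in range(n))
        return x, concept(x)

    def ORACLE(x):
        # Membership query: is this instance a positive example of the concept?
        return concept(x)

    def covered(x, dnf):
        return any(all(x[i] for i in term) for term in dnf)

    dnf = []                      # learned hypothesis: a list of monomials (index lists)
    for _ in range(500):          # illustrative sample budget
        x, label = EXAMPLES()
        if label and not covered(x, dnf):
            x = list(x)
            for i in range(n):    # drop each variable while ORACLE still says positive
                if x[i]:
                    x[i] = 0
                    if not ORACLE(tuple(x)):
                        x[i] = 1  # this variable is necessary for the prime implicant
            dnf.append([i for i in range(n) if x[i]])

    print(sorted(dnf))            # typically [[0, 1], [3, 4, 5]] once both terms are seen

Because every monomial added this way is an implicant of the hidden concept, the learned formula never accepts a true negative; any remaining error is confined to one class, which is the one-sided error situation Q13 describes.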
Q19. Which of the following is not a valid extension of PAC Learning?
A) Noisy PAC
B) Bayesian PAC
C) Logical PAC
D) Incremental PAC
Answer: Logical PAC
Q20. What happens to sample complexity as VC-dim increases?
A) Decreases
B) Remains constant
C) Increases
D) Becomes independent of error
Answer: Increases
Q21. Which condition implies that some Boolean functions are not PAC-learnable?
A) VC-dim = 0
B) Neural net is shallow
C) Cryptographic hardness assumptions
D) Finite sample space
Answer: Cryptographic hardness assumptions
Q22. Why is the general DNF class not known to be PAC-learnable?
A) Requires infinite data
B) Algorithm doesn’t exist
C) It’s NP-hard in general
D) No hypothesis class defined
Answer: It’s NP-hard in general
Q23. The presence of noise in data led to the development of which PAC extension?
A) Incremental PAC
B) Bayesian PAC
C) Noisy PAC
D) Approximate PAC
Answer: Noisy PAC
Q24. For PAC learning, which of the following guarantees is false?
A) Output hypothesis is always 100% accurate
B) Hypothesis has error ≤ ε with probability ≥ 1−δ
C) Uses i.i.d. samples
D) Time and sample bounds are polynomial
Answer: Output hypothesis is always 100% accurate
Q25. In a concept class with infinite VC-dimension:
A) Learning is guaranteed
B) Sample complexity becomes unbounded
C) Only one-sided error is allowed
D) ORACLE queries are unnecessary
Answer: Sample complexity becomes unbounded