IV Year – I Semester

MACHINE LEARNING

Course Objectives:

  • The course is designed to enable students to:

  • Gain knowledge about basic concepts of Machine Learning

  • Study different learning algorithms

  • Learn about the evaluation of learning algorithms

  • Learn about dimensionality reduction


Course Outcomes:

  • Identify machine learning techniques suitable for a given problem

  • Solve problems using various machine learning techniques

  • Apply dimensionality reduction techniques

  • Design applications using machine learning techniques

UNIT I

Introduction: Definition of learning systems, Goals and applications of machine learning, Aspects of developing a learning system: training data, concept representation, function approximation. Inductive Classification: The concept learning task, Concept learning as search through a hypothesis space, General-to-specific ordering of hypotheses, Finding maximally specific hypotheses, Version spaces and the candidate elimination algorithm, Learning conjunctive concepts, The importance of inductive bias.
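
To make the topic of finding maximally specific hypotheses concrete, here is a minimal Python sketch of the Find-S idea, assuming conjunctive hypotheses over categorical attributes and a boolean target; the attribute values and EnjoySport-style toy data are illustrative assumptions, not a prescribed dataset.

```python
# Minimal sketch of Find-S (maximally specific hypothesis),
# assuming categorical attributes and a boolean target.
# '0' means "matches nothing", '?' means "matches any value".

def find_s(examples):
    """Return the maximally specific hypothesis consistent
    with the positive training examples."""
    n_attrs = len(examples[0][0])
    h = ['0'] * n_attrs          # most specific hypothesis
    for attrs, label in examples:
        if not label:            # Find-S ignores negative examples
            continue
        for i, value in enumerate(attrs):
            if h[i] == '0':      # first positive example: copy it
                h[i] = value
            elif h[i] != value:  # conflict: generalize to wildcard
                h[i] = '?'
    return h

# Illustrative EnjoySport-style data (hypothetical values).
data = [
    (('sunny', 'warm', 'normal', 'strong'), True),
    (('sunny', 'warm', 'high',   'strong'), True),
    (('rainy', 'cold', 'high',   'strong'), False),
]
print(find_s(data))  # ['sunny', 'warm', '?', 'strong']
```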

UNIT II

Decision Tree Learning: Representing concepts as decision trees, Recursive induction of decision trees, Picking the best splitting attribute: entropy and information gain, Searching for simple trees and computational complexity, Occam's razor, Overfitting, noisy data, and pruning. Experimental Evaluation of Learning Algorithms: Measuring the accuracy of learned hypotheses. Comparing learning algorithms: cross-validation, learning curves, and statistical hypothesis testing.
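
As a brief worked illustration of the entropy and information-gain splitting criterion named above, the Python sketch below follows the standard definitions H(S) = -sum_i p_i log2(p_i) and Gain(S, A) = H(S) - sum_v (|S_v|/|S|) H(S_v); the toy labels and attribute values are illustrative assumptions.

```python
# Sketch of entropy and information gain, the splitting
# criterion used when inducing decision trees.
import math
from collections import Counter

def entropy(labels):
    """H(S) = -sum_i p_i * log2(p_i) over class proportions."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(labels).values())

def information_gain(labels, attribute_values):
    """Gain(S, A) = H(S) - sum_v (|S_v|/|S|) * H(S_v)."""
    n = len(labels)
    remainder = 0.0
    for v in set(attribute_values):
        subset = [l for l, a in zip(labels, attribute_values) if a == v]
        remainder += (len(subset) / n) * entropy(subset)
    return entropy(labels) - remainder

# Illustrative toy data: 6 examples, one candidate attribute.
labels = ['yes', 'yes', 'no', 'no', 'yes', 'no']
attr   = ['hot', 'hot', 'cold', 'cold', 'hot', 'hot']
print(round(information_gain(labels, attr), 3))  # 0.459
```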

UNIT III

Computational Learning Theory: Models of learnability: learning in the limit; probably approximately correct (PAC) learning. Sample complexity for infinite hypothesis spaces, Vapnik-Chervonenkis dimension. Rule Learning: Propositional and First-Order, Translating decision trees into rules, Heuristic rule induction using separate-and-conquer and information gain, First-order Horn-clause induction (Inductive Logic Programming) and FOIL, Learning recursive rules, Inverse resolution, GOLEM, and Progol.
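
Since the unit's PAC analysis bounds the number of training examples a consistent learner needs, the short Python sketch below evaluates the standard sample-complexity bound for a finite hypothesis space, m >= (1/eps)(ln|H| + ln(1/delta)); the |H|, eps, and delta values are illustrative assumptions.

```python
# Numeric sketch of the PAC sample-complexity bound for a
# finite hypothesis space and a consistent learner.
import math

def pac_sample_size(hypothesis_space_size, eps, delta):
    """Examples sufficient so that, with probability >= 1 - delta,
    any consistent hypothesis has true error <= eps."""
    return math.ceil((1 / eps) * (math.log(hypothesis_space_size)
                                  + math.log(1 / delta)))

# e.g. |H| = 2**10 hypotheses, 5% error, 95% confidence (illustrative)
print(pac_sample_size(2**10, eps=0.05, delta=0.05))  # 199
```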

UNIT IV

Artificial Neural Networks: Neurons and biological motivation, Linear threshold units. Perceptrons: representational limitation and gradient descent training, Multilayer networks and backpropagation, Hidden layers and constructing intermediate, distributed representations. Overfitting, learning network structure, recurrent networks. Support Vector Machines: Maximum margin linear separators. Quadratic programming solution for finding maximum margin separators. Kernels for learning non-linear functions.
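
As a minimal sketch of perceptron training covered in this unit, the Python code below applies the classic mistake-driven update rule (equivalent to stochastic gradient steps on the perceptron criterion); the toy data, learning rate, and epoch count are illustrative assumptions.

```python
# Minimal sketch of perceptron training: update the weights
# only when an example is misclassified.

def train_perceptron(data, lr=0.1, epochs=20):
    """data: list of (features, label) with label in {-1, +1}."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:       # misclassified: update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Linearly separable toy data: AND-like function on {0,1}^2
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), +1)]
w, b = train_perceptron(data)
print(w, b)
```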

UNIT V

Bayesian Learning: Probability theory and Bayes' rule. Naive Bayes learning algorithm. Parameter smoothing. Generative vs. discriminative training. Logistic regression. Bayes nets and Markov nets for representing dependencies. Instance-Based Learning: Constructing explicit generalizations versus comparing to past specific examples. k-Nearest-neighbor algorithm. Case-based learning.
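
To make the k-nearest-neighbor algorithm concrete, here is a compact Python sketch assuming Euclidean distance and majority voting among the k closest training points; the toy points and labels are illustrative assumptions.

```python
# Compact sketch of k-nearest-neighbor classification:
# predict the majority label among the k closest training points.
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (point, label); query: point to classify."""
    neighbors = sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Illustrative two-class toy data in the plane.
train = [((1, 1), 'A'), ((1, 2), 'A'),
         ((5, 5), 'B'), ((6, 5), 'B'), ((5, 6), 'B')]
print(knn_predict(train, (2, 1), k=3))  # 'A': two of three nearest are A
```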