CSC581 Homepage

Topics in AI: Introduction to Machine Learning with Support Vector Machines

Spring 2016

Instructor:

Dr. Lutz Hamel
Tyler, Rm 251
Office Hours: Tuesday 12:30-1:30pm and Thursday 2-3pm
email: hamel@cs.uri.edu

Description:

Support vector machines (SVMs) belong to a new class of machine learning algorithms with their origins firmly rooted in statistical learning theory. Due to this strong theoretical foundation, these algorithms possess desirable properties such as the ability to learn from very small sample sets and a firm estimate of the generalization capacity of the learned model. These properties make this new class of learning algorithms extremely attractive to the practitioner, who is frequently faced with "not enough data" and needs to understand how good a constructed model actually is. The fact that SVMs have surpassed the performance of artificial neural networks in many areas such as text categorization, speech recognition, and bioinformatics bears witness to the power of this new class of learning algorithms.

This course is an introduction to machine learning and SVMs. We begin by framing the notion of machine learning and then develop basic concepts such as hyperplanes, feature spaces, and kernels necessary for the construction of SVMs. Once the theoretical groundwork has been laid, we look at practical examples where this class of algorithms can be applied. Here we use machine learning as a knowledge discovery tool. We will use the statistical computing environment R for our experiments.
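To give a flavor of the kind of experiment we will run, here is a minimal sketch of training an SVM in R. It assumes the e1071 package, one of several R packages that provide an svm() function, and uses R's built-in iris data set:

	# load the e1071 package which provides svm()
	library(e1071)

	# the iris data set ships with R; the last column 'Species'
	# is the label we want to predict
	data(iris)

	# train an SVM classifier on the four flower measurements
	model <- svm(Species ~ ., data=iris)

	# tabulate the predicted labels against the true labels
	table(predict(model, iris), iris$Species)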

The goals of this course are for you:

Announcements:

*** The final is due Tuesday 5/10 @ midnight in Sakai ***

[4/14/16] Posted assignment #8
[4/7/16] Posted assignment #7
[3/30/16] Posted assignment #6
[3/16/16] ** Midterm: due Thursday 3/31 in Sakai **
[3/14/16] ** no class Thursday 3/17 **
[3/14/16] Hint for assignment #5 - for data sets with a large number of independent variables you don't have to plot all the distributions; you should only plot the distributions of variables that are "interesting", i.e., not normally or close to normally distributed. See the sketch below.
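For example, here is a minimal sketch of one way to screen the distributions. The name my.data.frame is just for illustration, and the sketch assumes the last column is the target attribute:

	# plot the distribution of each numeric independent attribute;
	# the "interesting" ones are those that do not look normal
	df <- my.data.frame[ , -ncol(my.data.frame)]
	for (name in names(df)) {
		if (is.numeric(df[[name]]))
			hist(df[[name]], main=name, xlab=name)
	}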
[3/9/16] Posted solutions to assignment #4.
[3/8/16] Posted assignment #5.
[2/29/16] Posted assignment #4.
[2/29/16] Posted solutions for assignment #3.
[2/24/16] Hint for assignment #3: a b-value interval of [-20,20] with step 0.1 is sufficient, and you can plot the decision surface with its corresponding margin as follows:
plot.decision.surface <- function(w,b) {
	# assumes the decision surface w.x = b, which in the (x1,x2)
	# plane is the line x2 = -(w[1]/w[2])*x1 + b/w[2]
	slope = -(w[1]/w[2])
	offset = b/w[2]
	offset1 = (b+1)/w[2]	# supporting hyperplane w.x = b+1
	offset2 = (b-1)/w[2]	# supporting hyperplane w.x = b-1

	cat("slope = ", slope, "offset = ", offset, "\n")

	# add the decision surface with its supporting hyperplanes
	# to the current plot
	abline(offset,slope,lty="solid",lwd=2,col="green")
	abline(offset1,slope,lty="dashed")
	abline(offset2,slope,lty="dashed")
}
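Here is a minimal usage sketch; the weight vector and offset are made-up values for illustration only, and the points must be plotted first so that abline() has a plot to draw into:

	# hypothetical weight vector and offset
	w <- c(1,1)
	b <- 2

	# plot a few points, then add the decision surface
	plot(c(0,1,2,3), c(0,1,2,3), xlab="x1", ylab="x2")
	plot.decision.surface(w,b)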
[2/20/16] Posted solutions to assignment #2
[2/17/16] Posted assignment #3
[2/3/16] Posted assignment #2
[2/2/16] Here are two data sets to use for 1.4: mammals and biomedical
[1/29/16] CSC581 Sakai page is now live.
[1/29/16] A majority label classifier is a model that ignores all information except the number of occurrences of each label in the target attribute. The model itself is a function that, regardless of the object you hand it, always returns the label with the largest number of occurrences -- the majority label.

Here are some R hints for assignment #1, problem 1.4. Assume that training.df is a data frame where the last attribute is a categorical attribute (an attribute with labels); here is R code that computes the majority label for that attribute:

	n <- ncol(training.df) # number of columns in a frame
	target.attribute <- training.df[[n]] # another way of retrieving columns from a frame using the [[ ]] notation
	target.levels <- table(target.attribute) # tabulate the levels in the target attribute
	ix <- which.max(target.levels) # find out which level appears most often
	majority.label <- names(target.levels[ix]) # convert the level descriptor into a string
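Putting these lines together, here is one possible sketch of a complete majority label learner, written in the same style as the first-label learner below. The name majority.learner is just for illustration:

	# construct a model that always returns the majority label
	# of the last (target) attribute in the training data
	majority.learner <- function(training.df) {
		if (!is.data.frame(training.df))
			stop("not a data frame")

		# tabulate the levels of the target attribute and pick
		# the level that occurs most often
		target.levels <- table(training.df[[ncol(training.df)]])
		majority.label <- names(which.max(target.levels))

		# the model ignores its input and returns the majority label
		function(x) majority.label
	}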
Here is a learner that constructs a model that, given an appropriate object, always returns the first label found in the training data set.
# This file contains a function 'learner' that constructs
# a model function based on the first label of the
# dependent attribute that it finds in the training data.
# The learner makes the assumption that the
# dependent attribute is always the last column
# in the training data.
#
# use:
#  model <- learner(training.df) # to build the model
#  model(x) # to predict the label of object x

learner <- function(training.df) {

	# make sure we are actually handed a data frame
	if (!is.data.frame(training.df))
		stop("not a data frame")

	# find the number of columns
	n <- ncol(training.df)

	# find the first label in the training data
	label <- training.df[[1,n]]

	# build our model
	# our model is a **function** that given any object always
	# returns the label that appeared first in the training data
	function(x) label
}
Here is an example of how to use this learner and the model it builds. Assume that the above code was saved in the file 'first-label.r' in some directory.
### get the current working directory
> getwd()
[1] "/assignment #1"
### my function is saved in the 'code' subfolder
> setwd("code")
### read in the function definition
> source("first-label.r")
### let's make sure the learner is what we expect it to be...
> learner
function(training.df) {

	# make sure we are actually handed a data frame
	if (!is.data.frame(training.df))
		stop("not a data frame")

	# find the number of columns
	n <- ncol(training.df)

	# find the first label in the training data
	label <- training.df[[1,n]]

	# build our model
	# our model always returns the label that appeared first in the
	# training data
	function(x) label
}
### looks good ... load a data set
> data(iris)
### build a model
> m <- learner(iris)
### let's take a look at the model -- the model consists of a function with an appropriate
### environment that has the variable 'label' defined.
> m
function(x) label
<environment: 0x100d0cdb8>
### build a data frame that only has object descriptions, no labels
> objects <- subset(iris,select=-Species)
### apply the model to the first object ... note the notation!!
> m(objects[[1,]])
[1] setosa
Levels: setosa versicolor virginica
### apply the model to the 100th object
> m(objects[[100,]])
[1] setosa
Levels: setosa versicolor virginica
Now, putting all of this together, you should be able to solve 1.4.
[1/28/16] Posted assignment #1 - see below
[1/26/16] Welcome!

Documents of Interest:

Data Sets:

Many of the packages above have accompanying data sets, but the premier source for experimental machine learning data sets is the UCI Machine Learning Repository. The StatLib library at CMU is another great place to look for data.

Assignments: