{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Knowledge Discovery and Data Mining I - Winter Semester 2018/19\n",
"\n",
"* Lecturer: Prof. Dr. Thomas Seidl\n",
"* Assistants: Max Berrendorf, Julian Busch"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial 2: Explorative Data Analysis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this tutorial, we want to give you the opportunity to do some explorative data analysis on your own. For this purpose, we will use a real-world dataset of crime in Chicago. The full dataset is accessible at https://catalog.data.gov/dataset/crimes-2001-to-present-398a4, but we also provide a subset of the data in compressed form at http://www.dbs.ifi.lmu.de/Lehre/KDD/WS1819/tutorials/ChicagoCrime2017.csv.xz.\n",
"You do not need to unpack the file, as `pandas` can also read from compressed CSV files. \n",
"\n",
"The meaning of each column is described at https://data.cityofchicago.org/Public-Safety/Crimes-2001-to-present/ijzp-q8t2 .\n",
"\n",
"If you need any help, feel free to contact your tutor or the assistants."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 0. Imports"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this tutorial, we will need the following packages. Please refer to the Python introduction if you need to know how to install Python packages."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import numpy\n",
"import pandas\n",
"from matplotlib import pyplot as plt\n",
"import seaborn\n",
"from pandas.plotting import parallel_coordinates"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Set default figure size\n",
"from matplotlib import rcParams\n",
"rcParams['figure.figsize'] = (15, 15)\n",
"\n",
"# For wide screens\n",
"from IPython.core.display import display, HTML\n",
"display(HTML(\"\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Load data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, please insert the path of the downloaded file. A relative path will be relative to the notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: Fill me\n",
"file_path = None"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Use `pandas.read_csv` to read the CSV file into a pandas `DataFrame`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: Fill me"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`df.info()` displays type and shape information about the table."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: Fill me"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`df.head()` shows the first rows of the table."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"# TODO: Fill me"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For some rows, there were missing values.\n",
"The parser has replaced them with `NaN`.\n",
"We remove these rows using the following line."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.dropna(how='any', inplace=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Preprocessing"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From the output of `df.info()` above, you can see that the columns were converted to one of the following types: `bool`, `float64`, `int64`, `object`.\n",
"In the following, we will pre-process the data for the visualizations."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Perform the following operations on the columns:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ID columns"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following columns are identifiers, which we will just keep as they are: `'ID', 'Case Number'` (The `ID` is unique per row, whereas the `Case Number` can be used to find multiple crimes connected to one case).\n",
"We set the `ID` as index, and drop `Case Number`."
]
},
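{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the two operations on a toy frame (illustrative values only; adapt the calls to `df`):\n",
"\n",
"```python\n",
"import pandas\n",
"\n",
"toy = pandas.DataFrame({'ID': [1, 2], 'Case Number': ['A', 'B'], 'Arrest': [True, False]})\n",
"toy = toy.set_index('ID')\n",
"toy = toy.drop(columns=['Case Number'])\n",
"```"
]
},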
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: Fill me"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Time"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As loaded, the table has three time-related attributes: `'Year'`, `'Updated On'`, and `'Date'`. As we are only looking at the year 2017, the `'Year'` column can be dropped.\n",
"The `'Updated On'` column is also of no use to us.\n",
"Taking a closer look at `'Date'`, we find that this column carries not only information about the day, but also the time.\n",
"We will create four columns from it: `month`, `day`, `weekday`, `hour`.\n",
"You can use the following snippet to parse the date as given into a `datetime`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"datetime_column = pandas.to_datetime(df['Date'], format='%m/%d/%Y %I:%M:%S %p')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Consult https://pandas.pydata.org/pandas-docs/stable/categorical.html#string-and-datetime-accessors for information on how to work with columns of type `datetime`."
]
},
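{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the `.dt` accessors on a toy frame (the single date value is illustrative):\n",
"\n",
"```python\n",
"import pandas\n",
"\n",
"toy = pandas.DataFrame({'Date': ['01/15/2017 11:30:00 PM']})\n",
"dt = pandas.to_datetime(toy['Date'], format='%m/%d/%Y %I:%M:%S %p')\n",
"toy['month'] = dt.dt.month\n",
"toy['day'] = dt.dt.day\n",
"toy['weekday'] = dt.dt.weekday\n",
"toy['hour'] = dt.dt.hour\n",
"```"
]
},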
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"# TODO: Fill me"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Location"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The given data provides location information on several levels:\n",
"* Longitude, Latitude: Geo position\n",
"* Location = tuple (Longitude, Latitude) (_redundant_)\n",
"* X Coordinate, Y Coordinate: position in the State Plane Illinois East NAD 1983 projection (_redundant_)\n",
"* Block, Beat, Community Area, District, Ward: All regions in Chicago. They follow the hierarchy Block < Beat < Community Area < Ward < District.\n",
"* Due to missing values in the original data, `Beat, Community Area, District, Ward` have been parsed as float datatype. For visually more pleasing labels, you can first convert them to integers again, before making them categorical.\n",
"\n",
"Apply the following operations on the location-related columns:\n",
"\n",
"|Column|Operation|\n",
"|--|--|\n",
"| 'Longitude' | Keep |\n",
"| 'Latitude' | Keep |\n",
"| 'Location' | Drop |\n",
"| 'X Coordinate' | Drop |\n",
"| 'Y Coordinate' | Drop |\n",
"| 'Block' | Categorical |\n",
"| 'Beat' | Categorical |\n",
"| 'Community Area' | Categorical |\n",
"| 'District' | Categorical |\n",
"| 'Ward' | Categorical |\n",
"\n",
"Hints:\n",
"* You can use [`df.drop(columns=, inplace=True)`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html) to remove a column from a table (working in-place).\n",
"* You can use [`df[] = df[].astype('category')`](https://pandas.pydata.org/pandas-docs/stable/categorical.html#series-creation) to convert a column to categorical data type."
]
},
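{
"cell_type": "markdown",
"metadata": {},
"source": [
"The table boils down to `drop` and `astype`; a minimal sketch on a toy frame (illustrative values):\n",
"\n",
"```python\n",
"import pandas\n",
"\n",
"toy = pandas.DataFrame({'Latitude': [41.8], 'Location': ['(41.8, -87.6)'], 'Ward': [42.0]})\n",
"toy.drop(columns=['Location'], inplace=True)\n",
"toy['Ward'] = toy['Ward'].astype(int).astype('category')\n",
"```"
]
},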
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: Fill me"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Crime Description"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The crime is described using the following columns:\n",
"* 'IUCR', 'Primary Type', 'Description': description of the type of crime. 'Description' is actually also a categorisation on a finer hierarchy level than 'Primary Type'. 'IUCR' is equivalent to 'Description', but a description in form of a number, instead of textual description.\n",
"* 'FBI Code': A different categorisation of the crime (cf. http://gis.chicagopolice.org/clearmap_crime_sums/crime_types.html)\n",
"* 'Arrest', 'Domestic': Boolean flags indicating whether an arrest was made, and if the incident was classified as domestic violence.\n",
"* 'Location Description': A textual description of the location.\n",
"\n",
"Apply the following operations on the crime-description columns:\n",
"\n",
"|Column|Operation|\n",
"|--|--|\n",
"| 'IUCR' | Drop |\n",
"| 'Primary Type' | Categorical |\n",
"| 'Description' | Categorical |\n",
"| 'FBI Code' | Drop |\n",
"| 'Arrest' | Categorical |\n",
"| 'Domestic' | Categorical |\n",
"| 'Location Description' | Categorical |"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"# TODO: Fill me"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have finished preprocessing the data, you can show some information about the resulting data frame using `df.head()` and `df.info()`.\n",
"In addition, you can use `df.describe()` to obtain some measures of central tendency and dispersion about the numeric columns.\n",
"\n",
"Can you already see something insightful from these simple statistics?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: Fill me"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Bonus Task"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Use `df.groupby(by=columns).size()` to count how often certain combinations of values in the given columns occur, e.g. use `columns=['Primary Type']` to get a list of the most frequent primary crime types."
]
},
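{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the counting idiom on a toy frame:\n",
"\n",
"```python\n",
"import pandas\n",
"\n",
"toy = pandas.DataFrame({'Primary Type': ['THEFT', 'THEFT', 'BATTERY']})\n",
"counts = toy.groupby(by=['Primary Type']).size().sort_values(ascending=False)\n",
"```"
]
},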
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# TODO: Fill me"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Basic Visualization"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Histograms"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To visualize the distribution of categorical attributes, we will first use equi-width histograms. Create such histograms for all categorical attributes.\n",
"\n",
"What form do these distributions exhibit? Are there distinct peaks? Can you guess reasons for them? What insights do you gain? How useful are the plots for the region attributes?\n",
"\n",
"Hint: \n",
"\n",
"* You can create a histogram using \n",
"\n",
"```seaborn.catplot(data=df, x=, kind='count')```\n",
"\n",
"* Do not create histograms for region attributes finer-grained than `Ward`.\n",
"* You can also pass an explicit order using the `order=` keyword of `seaborn.catplot`. What are useful options for ordering the values?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"# TODO: Fill me"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Quantile Plots"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* To create quantile plots, you can use `df.quantile(q=)`, where `q` is a list of quantile levels you want to compute. The return value is a dataframe whose index contains the $p$ values, and whose columns contain the corresponding quantiles for each numeric(!) column in the original dataframe.\n",
"* To get `num` equi-distant values between $0$ and $1$, you can use `numpy.linspace(start=0.0, stop=1.0, num=num)`.\n",
"* To create the plot, use `seaborn.scatterplot(data=quantiles, x=, y=)`\n",
"\n",
"Questions:\n",
"* What do you observe here?\n",
"* What benefit does a quantile plot offer compared to histograms?\n",
"* How does the quantile plot behave for discrete attributes?"
]
},
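{
"cell_type": "markdown",
"metadata": {},
"source": [
"The computation itself can be sketched on toy data (pass the resulting frame to `seaborn.scatterplot` for the plot):\n",
"\n",
"```python\n",
"import numpy\n",
"import pandas\n",
"\n",
"toy = pandas.DataFrame({'hour': [0, 6, 12, 18]})\n",
"ps = numpy.linspace(start=0.0, stop=1.0, num=5)\n",
"quantiles = toy.quantile(q=ps)\n",
"```"
]
},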
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"# TODO: Fill me"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Boxplot"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Another option to display the distribution of data values of one attribute is the boxplot.\n",
"It also allows for easy comparison between the distribution in different subsamples.\n",
"\n",
"Tasks:\n",
"* Compare the distribution of `'hour'` for different values of `'Primary Type'`. (Is there a \"typical\" time for some kinds of crime?)\n",
"* Compare the distribution of `'hour'` for different values of `'weekday'`.\n",
"* Again, you can use the `order` keyword to pass some explicit order. What are meaningful orderings? Do they facilitate understanding the plot?\n",
"* Think about/Try out more comparisons. Is there something you can discover?\n",
"* How meaningful are boxplots for small samples?\n",
"\n",
"Hint:\n",
"* You can use `plt.xticks(rotation=90)` to rotate the tick labels on the x-axis."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"# TODO: Fill me"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Violin Plot"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Compared to a boxplot, _violin plots_ replace the box by a kernel density estimate of the data distribution.\n",
"Thereby, they reveal more details about the distribution of the data.\n",
"You can create violin plots using\n",
"\n",
"```\n",
"seaborn.violinplot(data=df, x=, y=)\n",
"```\n",
"\n",
"Create a violin plot for\n",
"* `x='Primary Type'`, `y='hour'`; restrict 'Primary Type' to the 5 most common types.\n",
"* Explicitly set the bandwidth of the kernel to the values 1.0, 0.1 and 0.01. What do you observe?\n",
"\n",
"Hint:\n",
"* You can count the absolute frequency of a crime type using \n",
"\n",
"```df.groupby(by='Primary Type').agg({'Primary Type': 'count'})```"
]
},
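{
"cell_type": "markdown",
"metadata": {},
"source": [
"Restricting to the most common types can be sketched with `value_counts` (an alternative to the `groupby` hint above; toy data):\n",
"\n",
"```python\n",
"import pandas\n",
"\n",
"toy = pandas.DataFrame({'Primary Type': ['THEFT', 'THEFT', 'BATTERY', 'ARSON']})\n",
"top = toy['Primary Type'].value_counts().nlargest(2).index\n",
"subset = toy[toy['Primary Type'].isin(top)]\n",
"```"
]
},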
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"# TODO: Fill me"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Advanced Visualization"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Scatterplot Matrix"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* To create a scatterplot matrix, you can use\n",
"```\n",
"seaborn.pairplot(data=data, kind='scatter', diag_kind='scatter')\n",
"```\n",
"* As this visualization is quite compute-intensive, you can call it on a subsample of the data using `df.sample(frac=)`, e.g. `frac=0.01`.\n",
"* `pairplot` takes an additional keyword argument `hue`, to which you can pass the name of a (categorical) column. The points are then coloured by this attribute.\n",
"* You can use `plot_kws={: }` to pass arguments to the plot function, e.g. `plot_kws={'alpha': .2}` to plot the points with transparency.\n",
"* You can use `vars=` to give an explicit order, or use only a subset of the columns. Experiment with different orderings."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: Fill me"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When scattering many points, the scatter matrix may become cluttered, and it is hard to detect areas where many points lie, as they overlap.\n",
"To cope with this situation, one can replace the scatter plots in each cell by 2-D histograms.\n",
"\n",
"For this we will use [`seaborn.PairGrid`](https://seaborn.pydata.org/generated/seaborn.PairGrid.html#seaborn.PairGrid). A pair grid defines the \"matrix\" as known from the scatter plot, and provides interfaces to map different visualizations to the specific cells.\n",
"For easier usage, we provide the following method, which plots an equi-width 2-D histogram into a cell.\n",
"Moreover, it limits the number of bins in the $x$ and $y$ directions to the number of distinct values, in order to allow displaying categorical variables.\n",
"Be aware that the order in that case does not necessarily convey any meaning!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def hist2d(x, y, color, label, max_bins=50, **kwargs):\n",
"    # Use at most max_bins bins per axis, but never more than the number of\n",
"    # distinct values, so categorical codes map to one bin per category.\n",
"    x_bins = min(numpy.unique(x).size, max_bins)\n",
"    y_bins = min(numpy.unique(y).size, max_bins)\n",
"    # color, label and kwargs are accepted for compatibility with PairGrid.map\n",
"    plt.hist2d(x=x, y=y, bins=(x_bins, y_bins))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, create the pair grid `g` and map the method `hist2d` to the cells using `g.map(hist2d)`. As before, you can pass the keyword argument `vars` to the `PairGrid` constructor to select only a subset of columns.\n",
"You may also want to start with only a fraction of the data (i.e. using `df.sample(frac=)` as value for the `data` parameter).\n",
"However, you should be able to run the visualization even for the full dataset.\n",
"\n",
"* Look at the cell `(longitude, latitude)`. What can you observe? Also compare the plot against the shape of Chicago, e.g. from [OpenStreetMap](https://www.openstreetmap.org/relation/122604#map=11/41.8338/-87.7320&layers=N) or [Wikipedia](https://en.wikipedia.org/wiki/Chicago#/media/File:Chicago_community_areas_map.svg)\n",
"* Look at the cells of the region hierarchy. What can you discover here?\n",
"* What else can you see? Compare it against what you already saw in e.g. the histograms.\n",
"\n",
"Hint:\n",
"* You can use `plt.subplots_adjust(wspace=0.02, hspace=0.02)` to compact the grid."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"# TODO: Fill me"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Pixel-Oriented Visualization"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will also take a look into a (simple) pixel-oriented technique.\n",
"Here, we have one subplot for each attribute, and each data point maps to a single pixel per window, where the colour indicates the value.\n",
"First, we create a `(n_features x n_features)` matrix of subplots by \n",
"\n",
"```figure, axes = plt.subplots(nrows=n_features, ncols=n_features)```\n",
"\n",
"In row $r$, we display the whole dataset sorted by the $r$th feature.\n",
"In order to avoid artifacts from any internal presorting of the data set, shuffle the rows first.\n",
"You can do both using\n",
"\n",
"```sorted_df = df.sample(frac=1.0).sort_values(by=df.columns[r])```\n",
"\n",
"To visualize the $c$th feature in column $c$ (of the subplots matrix), we have to bring the values into a rectangular array.\n",
"We use `width=512`, and infer the height by `height = int(numpy.ceil(n_samples / width))`.\n",
"You can use the following method to bring a column from a data frame into a rectangular array."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def prepare_cell(df_column, width=512):\n",
"    # Pad the column with NaN to a multiple of `width` and reshape it into a\n",
"    # rectangular (height, width) array suitable for `imshow`.\n",
"    n_samples = len(df_column)\n",
"    height = int(numpy.ceil(n_samples / width))\n",
"    buffer = numpy.empty(shape=(height, width))\n",
"    buffer[:] = float('nan')\n",
"    # Categorical columns are replaced by their integer codes.\n",
"    if hasattr(df_column, 'cat'):\n",
"        df_column = df_column.cat.codes\n",
"    buffer.flat[:n_samples] = df_column.values\n",
"    return buffer"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, you can use `axes[r, c].imshow(buffer, aspect='auto')` to show the data in `buffer` in row $r$, column $c$ of the matrix.\n",
"\n",
"Questions:\n",
"* Is there something you can observe? How do you recognise if there is some pattern in the data?\n",
"* Disable the sorting again. What is the effect on the readability of the visualization?\n",
"\n",
"Hints:\n",
"* Use `axes[r, c].set_xticks([])`, `axes[r, c].set_yticks([])` to disable tick labels on the axes.\n",
"* Use `axes[r, 0].set_ylabel(df.columns[r])` and `axes[-1, c].set_xlabel(df.columns[c])` to add x and y labels to your matrix.\n",
"* Use `plt.subplots_adjust(hspace=.02, wspace=.02)` to adjust the space between the cells."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: Fill me"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}