---
title: "Adaptive Binarisation"
date: 2019-12-17
draft: true
categories: [binarisation, preprocessing, image manipulation]
---
The [previous post](/posts/binarisation-introduction) covered the
basics of binarisation and introduced the Otsu algorithm, a good
method for finding a single global threshold for a page. But a
global threshold has inevitable limitations. Better is a threshold
that adapts across different regions of the page, so that as the
conditions of the page change, so can the threshold. This technique
is called adaptive binarisation.

For each pixel of an image, adaptive binarisation considers the
pixels around it to determine a good threshold. This means that even
in a heavily shaded area, for example near the spine of a book, the
text will be correctly differentiated from the background: although
both the text and the background there may be darker than the rest
of the page, it is the darkness of each pixel relative to its
surroundings that matters.
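One simple way to do this, not necessarily the method this series goes on to use, is mean thresholding: mark a pixel as foreground if it is darker than the mean of a window around it by more than some constant. A minimal sketch in NumPy, where the window size and the constant `c` are illustrative choices, using an integral image so the local means are cheap to compute:

```python
import numpy as np

def adaptive_threshold(img, window=15, c=10):
    """Binarise a greyscale image (2D array) with a local mean threshold.

    A pixel is foreground (True) if it is more than `c` darker than
    the mean of the `window` x `window` region centred on it.
    `window` should be odd; `c` guards against noise in flat regions.
    """
    pad = window // 2
    # Pad edges so border pixels have a full window around them.
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    # Integral image: ii[y, x] = sum of padded[:y, :x].
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))  # zero row/column at the top-left
    h, w = img.shape
    # Window sum for every pixel via four integral-image lookups.
    s = (ii[window:window + h, window:window + w]
         - ii[:h, window:window + w]
         - ii[window:window + h, :w]
         + ii[:h, :w])
    mean = s / (window * window)
    return img < mean - c
```

Because each pixel is compared only to its own neighbourhood, dark text in a shaded corner is still darker than the (also dark) background around it, so it is picked out where a single global threshold would lose it.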

<!--
(diagram showing 2 different areas of a page, one light and one dark,
comparing global and local thresholding [can be fake, as the global
threshold diagram was])
(actually can probably just have a dark area of a page, comparing global
and local thresholding, setting the global one such that the image is
screwed up)
-->