In this tutorial we will explore how to extract plain text from PDFs, including Optical Character Recognition (OCR). OCR is a machine-learning technique used to transform images that contain text (e.g. a scan of a document) into actual text content. For a quick introduction to the mechanics of OCR, see the readings for this module.

Be sure to install all of the software required for this module.

Is the text already there?

Many PDFs already have plain text embedded in them, either because they were born-digital (i.e. created from a word processing document) or because OCR was already performed on them (e.g. JSTOR does this for all of the articles in their database). You can usually tell whether or not text is embedded in the PDF by attempting to select a short passage with your mouse. If you can select words and phrases on a page, then there is embedded text present in the document.
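
If you'd rather check from the terminal, one rough test is to ask pdftotext (introduced below) for the embedded text and count the words it finds. This is only a sketch, assuming pdftotext is installed and your file is called document.pdf; a count at or near zero suggests there is no embedded text and you will need OCR:

$ pdftotext document.pdf - | wc -w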

To extract embedded text from a PDF, we can use an application called pdftotext (part of the Xpdf package). From the terminal, execute the following command:

$ pdftotext /path/to/my/document.pdf myoutputfile.txt

This will create a new file called "myoutputfile.txt" in your current working directory. If you open it, you should see the text that pdftotext was able to extract from your PDF document. Remember, this is not OCR: we're just extracting text that is already embedded in the PDF file.
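
If you have a whole folder of PDFs with embedded text, you can wrap the same command in a small shell loop. This is just a sketch, assuming the PDFs sit in your current working directory and you want a matching .txt file for each one:

$ for f in *.pdf; do pdftotext "$f" "${f%.pdf}.txt"; done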

Nope. OCR it is.

If text isn't already embedded in the PDF, then you'll need to use OCR to extract the text. Tesseract is an excellent open-source engine for OCR. But it can't read PDFs on its own. So we'll need to do this in three steps:

  1. Convert the PDF into images: one image per page.
  2. Use OCR to extract text from each image.
  3. Stitch the text from each image (page) together into a single text file.

Convert PDF to images

A PDF is a jumble of instructions for how to render a document on a screen or page. Although it may contain images, a PDF is not itself an image, and therefore we can't perform OCR on it directly. To convert PDFs to images, we use ImageMagick's convert command.

The basic syntax to convert a PDF to images is:

$ convert -density 300 /path/to/my/document.pdf -depth 8 -strip -background white -alpha off file.tiff

There are several things going on here:

  -density 300 renders the PDF at 300 dots per inch (DPI), which generally gives OCR engines enough detail to work with.
  -depth 8 sets the color depth of the output image to 8 bits per channel.
  -strip removes embedded color profiles and other metadata, keeping the output a bit smaller.
  -background white and -alpha off flatten any transparency onto a white background, so that transparent regions don't come out black.

The resulting file, file.tiff in the example above, should be a multi-page TIFF file. For a 15-page PDF, you can expect the resulting TIFF to be around 300MB. 
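
If you would rather produce one image per page (as in step 1 of the list above) instead of a single multi-page TIFF, convert can write separate files when you add +adjoin and put a page-number placeholder in the output filename. This is only a sketch, using the same options as above; page-%03d.tiff is just an example naming pattern:

$ convert -density 300 /path/to/my/document.pdf -depth 8 -strip -background white -alpha off +adjoin page-%03d.tiff

This should produce page-000.tiff, page-001.tiff, and so on, one file per page.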

Tesseract

Once you have a TIFF representation of your document, you can use Tesseract to (attempt to) extract plain text. The basic syntax is:

$ tesseract file.tiff output

This tells Tesseract to perform OCR on file.tiff and write the resulting text to output.txt (Tesseract adds the .txt extension to the output base name you provide). If your TIFF file contains multiple pages, Tesseract will process them in order and append each page's text to the same output file.

By default, Tesseract assumes that your documents are in English. If you are working with documents in another language, use the "-l" flag, which goes after the output base name. For example:

$ tesseract file.tiff output -l [lan]

[lan] should be a three-letter language code (e.g. deu for German or fra for French), and the corresponding language data must be installed. See the LANGUAGES section in the Tesseract documentation for a list of supported languages.
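
Finally, if you converted your PDF into one image per page (as in the per-page sketch above) rather than a single multi-page TIFF, you can carry out steps 2 and 3 from the list above with a short loop and cat. This is only a sketch, assuming your page images are named page-000.tiff, page-001.tiff, and so on, and that document-full.txt is just an example name for the combined output:

$ for f in page-*.tiff; do tesseract "$f" "${f%.tiff}"; done
$ cat page-*.txt > document-full.txt

Because the page numbers are zero-padded, the shell expands page-*.txt in page order, so the combined file reads from the first page to the last.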