A Jupyter-based tool to help parse out structured text from a PDF document and explore the contents.
- Install pixi
- Launch jupyter lab with `pixi run lab`
- Open and run `notebooks/iPyPDF.ipynb`
See `DEVELOPMENT.md` for development instructions.
Within the GUI are three panels: Table of Contents, PDF Viewer, and Tools. This section goes over the options available in the Tools panel.
This tab contains tools that iterate through each page of the PDF.
Text Only
: Runs each page through Tesseract to obtain plain text.

Parse Layout
: Uses layoutparser to label portions of the document as title, text, image, or table. The sections are then assembled using a few simple rules to approximate a shallow content hierarchy (a rough sketch follows this list).
  - Title and Text blocks are cropped out and sent through Tesseract to obtain the text.
  - Tables are processed using a rule-based table parsing scheme described here.
  - Image blocks have no additional processing.
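The overall flow can be sketched roughly as below. This is an illustration rather than the tool's actual code: it assumes layoutparser with the Detectron2 backend, pdf2image, and pytesseract are installed, and the model config, label map, and `document.pdf` follow layoutparser's standard PubLayNet example.

```python
# Minimal sketch: detect layout blocks per page and OCR the Title/Text blocks.
# Assumes layoutparser (Detectron2 backend), pdf2image (poppler), and pytesseract.
import layoutparser as lp
import pytesseract
from pdf2image import convert_from_path

model = lp.Detectron2LayoutModel(
    "lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config",
    label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
)

pages = convert_from_path("document.pdf")  # one PIL image per page
for page_number, page in enumerate(pages, start=1):
    layout = model.detect(page)
    for block in layout:
        if block.type in ("Title", "Text"):
            # Crop the detected region and run it through Tesseract.
            cropped = page.crop(tuple(map(int, block.coordinates)))
            text = pytesseract.image_to_string(cropped)
            print(page_number, block.type, text[:60])
        # Table and Figure blocks would be handled separately.
```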
Notice that section 3 is missing; the process is not perfect. In this case, a section title was mislabeled by layoutparser as standard text. Mistakes like this are fairly common. To correct them, you can edit the table of contents using the arrow keys (the cursor must be hovering over the table of contents).
`Folders`, `PDF Documents`, and `Sections` have a tab labeled `Cytoscape`. This runs a TF-IDF similarity calculation over all nodes beneath the selected item; for example, if you select the root node, all defined nodes are included in the calculation. However, only nodes with a link to another node are drawn (this is for speed and may change in the future).
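A rough sketch of this kind of similarity pass, using scikit-learn rather than the tool's internal code (the `nodes` mapping and the threshold value are illustrative assumptions):

```python
# Compute pairwise TF-IDF cosine similarity over node texts and keep the
# pairs above a threshold; in the GUI these pairs become graph edges, and
# nodes with no edge at all are not drawn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

nodes = {
    "doc1/section1": "text of the first section ...",
    "doc1/section2": "text of the second section ...",
    "doc2/section1": "text from another document ...",
}

ids = list(nodes)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(nodes.values())
sim = cosine_similarity(tfidf)

THRESHOLD = 0.2  # illustrative cutoff
edges = [
    (ids[i], ids[j], sim[i, j])
    for i in range(len(ids))
    for j in range(i + 1, len(ids))
    if sim[i, j] > THRESHOLD
]
print(edges)
```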
The color of each node denotes the PDF document it originated from. Selecting a node in the graph will highlight the corresponding node in the `DocTree`. Clicking the node in the `DocTree` will render the first page of that node.
Extracts named entities from the selected branch of the document tree. That is, the raw text is compiled from a depth-first search on whichever node is selected in the table of contents; then spaCy's `nlp(text).ents` returns the named entities found within that section.
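A minimal sketch of that step, assuming spaCy and its small English model (`en_core_web_sm`) are installed; `collect_text` is a hypothetical stand-in for the tool's depth-first traversal, and the node dictionaries are illustrative.

```python
# Compile text from a node and its children depth-first, then extract entities.
import spacy

nlp = spacy.load("en_core_web_sm")

def collect_text(node):
    """Depth-first concatenation of a node's text and all of its children."""
    parts = [node.get("content", "")]
    for child in node.get("children", []):
        parts.append(collect_text(child))
    return "\n".join(parts)

selected = {
    "content": "Apollo 11 was launched by NASA in July 1969.",
    "children": [{"content": "Neil Armstrong stepped onto the Moon.", "children": []}],
}

doc = nlp(collect_text(selected))
for ent in doc.ents:
    print(ent.text, ent.label_)
```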
I recommend turning off `Show Boxes`, as this changes pages every time you add a node (I'm working on a better solution).
Each node has a specific set of tools available to use. Here are the tools provided when a `Section` node is selected. Starting from the left:

- `Add Section Node` adds a sub-node of type `Section` and selects it.
- `Add Text Node` adds a sub-node of type `Text` and selects it.
- `Add Image Node` ...
- `Delete Node` deletes the selected node and all of its children.
Content is extracted from the rendered image. Text is extracted using Optical Character Recognition (OCR). Image nodes don't perform any image analysis; they just record the coordinates and page number so the image can be retrieved later if needed.
When a `Section` node is selected, the selection tool will attempt to parse text from the portion of the page selected by the user. This text will overwrite the label assigned to the node.

When a `Text` node is selected, the selection tool will attempt to parse text from the selected area and append it to the node's content. This is because text blocks are not always perfectly rectangular and often span multiple pages.

When an `Image` node is selected, the coordinates of the box are appended to the node's content.
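The per-type behavior can be summarized with a sketch like the one below. The node structure (a dict with `type`, `label`, `content`, and `page` keys) and the function name are assumptions for illustration, not the tool's internal API.

```python
# Apply a user-drawn selection box to the currently selected node.
import pytesseract

def apply_selection(node, page_image, box):
    """`box` is (x1, y1, x2, y2) on the rendered page image (PIL).
    In this sketch, `content` is a string for Text nodes and a list of
    box records for Image nodes."""
    if node["type"] == "Section":
        # OCR the selection and use it as the section's new label.
        node["label"] = pytesseract.image_to_string(page_image.crop(box)).strip()
    elif node["type"] == "Text":
        # Text blocks may span several selections or pages, so append.
        node["content"] += pytesseract.image_to_string(page_image.crop(box))
    elif node["type"] == "Image":
        # No image analysis; just record where the image lives.
        node["content"].append({"page": node["page"], "coords": box})
```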
This will generate JSON files for each document. When the tool is initialized, these are used to reconstruct the table of contents. You can also use the JSON files directly.
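For example, you could read one of the generated files yourself. The exact schema isn't documented here, so the filename and field names below (`label`, `children`) are assumptions for illustration only.

```python
# Walk a saved document tree and print an indented outline.
import json

with open("my_document.json") as f:  # hypothetical output file
    root = json.load(f)

def walk(node, depth=0):
    """Print an indented outline of the reconstructed table of contents."""
    print("  " * depth + node.get("label", "<unnamed>"))
    for child in node.get("children", []):
        walk(child, depth + 1)

walk(root)
```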