Clean, filter and sample URLs to optimize data collection – Python & command-line – Deduplication, spam, content and language filters
Updated Dec 30, 2024 - Python
Remove clutter from URLs and return a canonicalized version
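Canonicalization is the basis for deduplication: two URLs that differ only in clutter (tracking parameters, fragments, default ports, host casing) should map to the same string. The following stdlib-only sketch illustrates the idea; the parameter list and normalization rules are illustrative assumptions, not the actual behavior of any particular library.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Query parameters commonly used for tracking; illustrative, not exhaustive.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "utm_term",
                   "utm_content", "gclid", "fbclid", "ref"}

def canonicalize(url: str) -> str:
    """Return a cleaned, canonical form of *url* suitable for deduplication."""
    parts = urlsplit(url.strip())
    scheme = parts.scheme.lower()
    netloc = parts.netloc.lower()
    # Drop default ports (http:80, https:443).
    if (scheme, netloc.rsplit(":", 1)[-1]) in (("http", "80"), ("https", "443")):
        netloc = netloc.rsplit(":", 1)[0]
    # Remove tracking parameters and sort the rest for a stable ordering.
    query = urlencode(sorted(
        (k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
        if k not in TRACKING_PARAMS
    ))
    # Drop the fragment entirely and collapse an empty path to "/".
    path = parts.path or "/"
    return urlunsplit((scheme, netloc, path, query, ""))
```

Deduplicating a URL list then reduces to keeping one URL per canonical form, e.g. `{canonicalize(u): u for u in urls}.values()`.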
A robust, modular web crawler built in Python for extracting and saving content from websites. It extracts text from both HTML and PDF files and saves it in a structured format with metadata.
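The HTML side of such a crawler can be sketched with the standard library alone; this is a minimal text extractor, not the repository's actual implementation (PDF extraction would additionally require a third-party library, which is omitted here):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> content."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0  # >0 while inside a skipped element
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

def extract_text(html: str) -> str:
    """Return the visible text of an HTML document as one string."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```

A real crawler would feed fetched pages into `extract_text` and store the result alongside metadata such as the source URL and fetch time.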
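The spam, content and language filters mentioned above can be sketched as simple URL-level predicates applied before fetching. All heuristics below (the spam pattern, rejected extensions, and the path-prefix language hint) are illustrative assumptions, not any library's actual rules:

```python
import re
from urllib.parse import urlsplit

# Illustrative spam heuristic: reject hosts containing obvious spam keywords.
SPAM_HOST_PATTERN = re.compile(r"casino|viagra|pharma", re.I)

# Content filter: extensions unlikely to hold extractable text.
REJECT_EXTENSIONS = {".jpg", ".png", ".gif", ".mp4", ".zip", ".css", ".js"}

def keep_url(url: str, allowed_langs=frozenset({"en", "de"})) -> bool:
    """Return True if *url* passes the spam, content and language filters."""
    parts = urlsplit(url)
    if parts.scheme not in {"http", "https"}:
        return False
    if SPAM_HOST_PATTERN.search(parts.netloc):
        return False
    # Content filter: reject obvious binary/media extensions.
    dot = parts.path.rfind(".")
    if dot > parts.path.rfind("/") and parts.path[dot:].lower() in REJECT_EXTENSIONS:
        return False
    # Crude language hint: a two-letter path prefix such as /en/ or /fr/.
    seg = parts.path.strip("/").split("/", 1)[0]
    if len(seg) == 2 and seg.isalpha() and seg.lower() not in allowed_langs:
        return False
    return True
```

Sampling then amounts to filtering the URL list with `keep_url` and drawing, for example, a fixed number of URLs per host to balance the collection.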