A Node.js utility that accepts a URL, crawls all links under the same domain, and takes a screenshot of each page once it finishes loading.
The following npm scripts are available; a sketch of how they might appear in package.json follows the list.

- Builds the project and runs the build output
- Starts the project in development mode; changes are redeployed automatically using nodemon
- Builds the project
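A minimal sketch of how these scripts could look in a package.json scripts block, assuming a TypeScript build with tsc, ts-node, and nodemon (the script names and tooling shown here are assumptions, not the project's actual configuration):

```json
{
  "scripts": {
    "build": "tsc",
    "start": "npm run build && node dist/index.js",
    "dev": "nodemon --watch src --exec ts-node src/index.ts"
  }
}
```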
```
npm install super-shots --save
```
Create a file and copy the contents below:
```ts
import SuperShots from 'super-shots';

async function runCrawler(): Promise<void> {
  const superShots = new SuperShots();
  // Set up the crawler before use.
  await superShots.initialize();

  const url = 'https://example.com'; // Replace with your desired URL
  // Crawl every link under the same domain and screenshot each page.
  await superShots.crawlAndScreenshot(url);

  // Clean up when finished.
  await superShots.close();
}

runCrawler()
  .then(() => console.log('Crawl and screenshot complete.'))
  .catch((error) => console.error('Error:', error));
```
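The same calls can be reused for more than one site. Below is a sketch that keeps a single SuperShots instance alive across several crawls; it relies only on the initialize, crawlAndScreenshot, and close calls shown above, and the URLs are placeholders:

```ts
import SuperShots from 'super-shots';

async function runBatch(urls: string[]): Promise<void> {
  const superShots = new SuperShots();
  await superShots.initialize();
  try {
    // Crawl each site in turn with the same instance.
    for (const url of urls) {
      await superShots.crawlAndScreenshot(url);
    }
  } finally {
    // Always clean up, even if one of the crawls fails.
    await superShots.close();
  }
}

runBatch(['https://example.com', 'https://example.org'])
  .then(() => console.log('All crawls complete.'))
  .catch((error) => console.error('Error:', error));
```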
Command line usage
```
$ npm install -g super-shots
$ super-shots https://example.com
```
Contributions are always welcome! See contributing.md for ways to get started. Please adhere to this project's code of conduct.