This is a simple console project that takes a URL and downloads everything on the page: CSS, JS, images, SVGs, videos, and the page itself saved as an HTML file. It works much like a web crawler and was created for training purposes. You can borrow ideas from it for your own projects and improve on its performance. The project also contains intentional flaws in its structure, and its error handling has obvious problems; if you are curious, download it, make the necessary changes, and publish your own version. Working through this source code will give you a good understanding of web crawling and will challenge and sharpen your refactoring skills.
The following items are used in this project:
- delegates
- exceptions
- static members
- extensions
- loops
- regular expressions
- WebClient
- System.IO
- inheritance
- global usings (C# 10)
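
To give a rough idea of how these pieces fit together, here is a minimal sketch (not the project's actual code) that downloads a page with `WebClient`, pulls asset URLs out of the HTML with a regular expression, and writes everything to disk with `System.IO`. The regex, class name, and file layout are illustrative assumptions.

```csharp
using System;
using System.IO;
using System.Net;
using System.Text.RegularExpressions;

class PageDownloader
{
    static void Main(string[] args)
    {
        var pageUrl = new Uri(args.Length > 0 ? args[0] : "https://example.com/");
        string outputDir = Directory.CreateDirectory("download").FullName;

        // WebClient is what the item list names; HttpClient is the modern alternative.
        using var client = new WebClient();
        string html = client.DownloadString(pageUrl);

        // Save the page itself as an HTML file.
        File.WriteAllText(Path.Combine(outputDir, "index.html"), html);

        // Very rough src/href extraction; a real crawler would use an HTML parser.
        var matches = Regex.Matches(html,
            @"(?:src|href)=[""']([^""']+\.(?:css|js|png|jpe?g|gif|svg|mp4))[""']",
            RegexOptions.IgnoreCase);

        foreach (Match match in matches)
        {
            var assetUri = new Uri(pageUrl, match.Groups[1].Value); // resolve relative URLs
            string fileName = Path.GetFileName(assetUri.LocalPath);
            if (string.IsNullOrEmpty(fileName)) continue;

            try
            {
                client.DownloadFile(assetUri, Path.Combine(outputDir, fileName));
            }
            catch (WebException ex)
            {
                Console.WriteLine($"Skipping {assetUri}: {ex.Message}");
            }
        }
    }
}
```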
- You can create a NuGet package from this code.
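
For the NuGet idea, the usual first step is to move the downloading logic into a class library with a small public API and then run `dotnet pack` on that library. The `SiteDownloader` class below is a hypothetical sketch of what that public surface might look like, not the project's actual types.

```csharp
// Hypothetical public API for a class-library version of the downloader.
// Packing it is then just `dotnet pack` on the library's .csproj.
using System.IO;
using System.Net;
using System.Threading.Tasks;

namespace SiteCrawler
{
    public class SiteDownloader
    {
        // Downloads the page at pageUrl into outputDirectory as index.html;
        // asset handling would hang off this same entry point.
        public async Task DownloadAsync(string pageUrl, string outputDirectory)
        {
            Directory.CreateDirectory(outputDirectory);
            using var client = new WebClient();
            string html = await client.DownloadStringTaskAsync(pageUrl);
            await File.WriteAllTextAsync(Path.Combine(outputDirectory, "index.html"), html);
        }
    }
}
```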
- You can reuse this code in project types other than a console application.
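
As one example of using it outside a console app, the same hypothetical `SiteDownloader` from the sketch above could be called from an ASP.NET Core minimal API. This assumes a project created with the Web SDK and is only an illustration of the idea.

```csharp
// Program.cs of a hypothetical ASP.NET Core minimal API project that reuses
// the SiteCrawler.SiteDownloader class sketched above.
using SiteCrawler;

var app = WebApplication.CreateBuilder(args).Build();
var downloader = new SiteDownloader();

// GET /download?url=https://example.com saves the page into a "downloads" folder.
app.MapGet("/download", async (string url) =>
{
    await downloader.DownloadAsync(url, "downloads");
    return Results.Text($"Downloaded {url}");
});

app.Run();
```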
- You can improve the performance, the way content is fetched, and how the downloaded files are created and where they are saved.
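
One common performance improvement, shown below as a hedged sketch rather than the project's code, is to switch to `HttpClient` and download assets concurrently with `async`/`await`, while letting the caller choose the output folder. The asset list here is a hypothetical stand-in for whatever the crawl step finds.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class ParallelAssetDownloader
{
    private static readonly HttpClient Http = new HttpClient();

    static async Task Main()
    {
        // Hypothetical asset list; in the real project this would come from the crawl step.
        var assetUrls = new List<Uri>
        {
            new Uri("https://example.com/site.css"),
            new Uri("https://example.com/app.js"),
        };

        // Let the caller decide where files end up instead of hard-coding a folder.
        string outputDir = Path.Combine(AppContext.BaseDirectory, "downloads");
        Directory.CreateDirectory(outputDir);

        // Start all downloads and wait for them together instead of one by one.
        var tasks = new List<Task>();
        foreach (var url in assetUrls)
            tasks.Add(SaveAsync(url, outputDir));
        await Task.WhenAll(tasks);
    }

    static async Task SaveAsync(Uri url, string outputDir)
    {
        byte[] bytes = await Http.GetByteArrayAsync(url);
        string path = Path.Combine(outputDir, Path.GetFileName(url.LocalPath));
        await File.WriteAllBytesAsync(path, bytes);
        Console.WriteLine($"Saved {url} -> {path}");
    }
}
```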
- You can add an xUnit or NUnit test project for it, or try rebuilding it with TDD to develop your test-writing skills.
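
A small xUnit example of the TDD angle might look like the following. `UrlExtensions.IsDownloadableAsset` is a hypothetical helper invented for the illustration; in a TDD workflow the test is written first and fails until the helper is implemented.

```csharp
// Test-first sketch: the test is written before the production code exists.
using Xunit;

public class UrlExtensionsTests
{
    [Theory]
    [InlineData("https://example.com/site.css", true)]
    [InlineData("https://example.com/logo.svg", true)]
    [InlineData("https://example.com/about", false)]
    public void IsDownloadableAsset_detects_asset_urls(string url, bool expected)
    {
        Assert.Equal(expected, UrlExtensions.IsDownloadableAsset(url));
    }
}

// Hypothetical production code added afterwards to make the test pass (red -> green).
public static class UrlExtensions
{
    public static bool IsDownloadableAsset(this string url) =>
        url.EndsWith(".css") || url.EndsWith(".js") || url.EndsWith(".svg") ||
        url.EndsWith(".png") || url.EndsWith(".jpg") || url.EndsWith(".mp4");
}
```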
- You can build on this project for practice by connecting it to SQL or NoSQL databases, using an ORM or ADO.NET.
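
For the database idea, a hedged EF Core sketch might look like the following. The `DownloadedFile` entity, the `CrawlerDbContext`, and the SQLite provider are all assumptions chosen for illustration; ADO.NET or a different provider would follow the same pattern.

```csharp
// Hypothetical EF Core model for persisting crawl results
// (requires the Microsoft.EntityFrameworkCore.Sqlite package).
using System;
using Microsoft.EntityFrameworkCore;

public class DownloadedFile
{
    public int Id { get; set; }
    public string Url { get; set; } = "";
    public string LocalPath { get; set; } = "";
    public DateTime DownloadedAt { get; set; }
}

public class CrawlerDbContext : DbContext
{
    public DbSet<DownloadedFile> DownloadedFiles => Set<DownloadedFile>();

    protected override void OnConfiguring(DbContextOptionsBuilder options) =>
        options.UseSqlite("Data Source=crawler.db");
}
```

Recording a file after it has been saved to disk is then a matter of adding a `DownloadedFile` row to the context and calling `SaveChanges()`.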