# Basic Concepts

In Crawlab, you can do awesome things such as creating and running distributed web crawlers, monitoring real-time logs, viewing crawled data, scheduling periodic tasks in a cron-style fashion, and configuring concurrency for each node. What makes these features possible in Crawlab, and how do they work?

In this section, we are going to introduce some basic concepts that will help you understand how to use Crawlab comprehensively and efficiently.