If we could snapshot the whole repository daily, then in the event of total server failure we could just copy the most recent backup onto a live stand-in server. Or, on discovering a corrupted file, we could browse back through the backups until we find the most recent undamaged version.
But in the real world, backing up a whole file repository takes time and system resources. So some egghead invented the "differential" backup scheme. It starts with a full backup; then, nightly or whatever, the backup process snapshots every file that has changed since that full backup. Each new differential supersedes the previous one, so only the most recent is kept. In the event of total server failure, the current repository state is reconstructed by restoring the full backup and then applying the differences recorded in the most recent differential. Given a file server capacity of S, the space required by a differential scheme will not exceed 2S: 1S for the full backup, plus at most 1S for a hypothetical differential representing 100% change.
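Here's a minimal sketch of that logic in Python, assuming a POSIX-style file tree and using modification times as the change detector (real backup tools tend to use archive bits, checksums, or filesystem journals instead). The paths and the copy_changed helper are made up for illustration:

```python
import shutil
import time
from pathlib import Path

def copy_changed(source: Path, dest: Path, since: float) -> None:
    """Copy every file under `source` modified after `since` into `dest`,
    preserving relative paths. since=0.0 copies everything (a full backup)."""
    for f in source.rglob("*"):
        if f.is_file() and f.stat().st_mtime > since:
            target = dest / f.relative_to(source)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps

# Record the mark BEFORE copying, so anything modified mid-backup
# gets picked up again by the next differential.
last_full = time.time()
copy_changed(Path("/srv/files"), Path("/backups/full"), since=0.0)

# Nightly differential: always measured against the last FULL backup,
# so each run supersedes the previous differential.
copy_changed(Path("/srv/files"), Path("/backups/diff-latest"), since=last_full)

def restore(server: Path) -> None:
    """Total-failure recovery: lay down the full backup, then overlay
    the most recent differential on top."""
    shutil.copytree(Path("/backups/full"), server, dirs_exist_ok=True)
    shutil.copytree(Path("/backups/diff-latest"), server, dirs_exist_ok=True)
```

Note that restore is always exactly two steps, no matter how many nights have passed; that simplicity is the differential scheme's main selling point.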
The weakness of the basic differential scheme shows up when valuable changes are made to a file after the last full backup and captured in that night's differential, and then the file becomes corrupted: the next differential snapshots the corrupted version and supersedes the only backup holding the good one, so the valuable changes are lost. You could retain a catalog of differential snapshots instead of discarding them, but the storage demands grow quickly, because each differential re-copies everything that has changed since the full backup.
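To put rough numbers on that growth, here's a toy model; the 1% nightly churn rate and the assumption that each night touches fresh data are invented purely for illustration. Under steady churn, the catalog grows roughly quadratically:

```python
# Toy model: repository of (normalized) size S = 1.0, where a fresh 1% of
# the data changes each night. The differential taken on night n contains
# everything changed since the full backup: roughly n% of S.
S = 1.0
daily_churn = 0.01   # invented for illustration
nights = 30

largest = min(nights * daily_churn, 1.0) * S
catalog = sum(min(n * daily_churn, 1.0) * S for n in range(1, nights + 1))

print(f"largest single differential: {largest:.2f} * S")  # 0.30 * S
print(f"30-night catalog, retained:  {catalog:.2f} * S")  # 4.65 * S
```

Retained for a month, the differentials re-copy the same early changes dozens of times over.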
So another egghead invented the "incremental" backup scheme. Again, this starts with a full backup. But instead of re-recording every difference since the full backup, each incremental snapshots only the files changed since the previous snapshot, and every snapshot is retained. A restore lays down the full backup and then replays each incremental in order, so any version that was ever snapshotted can be recovered, which protects against the loss of valuable changes. In most file libraries only a small subset of files changes from day to day, so the incremental catalog grows with the total amount of change rather than re-copying the same changes night after night.
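Sketching the incremental variant under the same assumptions (hypothetical paths, mtime-based change detection), the only real difference from the differential sketch is the reference point: each snapshot diffs against the previous snapshot, every snapshot is kept, and restore replays the whole chain:

```python
import shutil
import time
from pathlib import Path

def copy_changed(source: Path, dest: Path, since: float) -> None:
    """Same helper as the differential sketch: copy files modified after `since`."""
    for f in source.rglob("*"):
        if f.is_file() and f.stat().st_mtime > since:
            target = dest / f.relative_to(source)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)

source = Path("/srv/files")   # hypothetical paths throughout
snapshots: list[Path] = []    # retained in chronological order

last_mark = time.time()
copy_changed(source, Path("/backups/full"), since=0.0)

def nightly_incremental(night: int) -> None:
    """Snapshot only what changed since the PREVIOUS snapshot, and keep it."""
    global last_mark
    mark = time.time()
    dest = Path(f"/backups/incr-{night:03d}")
    copy_changed(source, dest, since=last_mark)
    snapshots.append(dest)
    last_mark = mark

def restore(server: Path) -> None:
    """Full backup first, then replay every incremental in order."""
    shutil.copytree(Path("/backups/full"), server, dirs_exist_ok=True)
    for snap in snapshots:
        shutil.copytree(snap, server, dirs_exist_ok=True)
```

The trade-off is on the restore side: instead of two steps, recovery must replay the entire chain, and a missing or damaged incremental breaks everything after it.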
In an environment of daily system-wide changes, all of these strategies converge: when every file changes every day, each differential or incremental snapshot contains the whole repository, and you're effectively taking daily full backups.