
The Duplicati 2.0 storage engine is here!

posted Sep 13, 2013, 9:11 AM by Rene Stach

This is our second experimental release, and it features the new storage engine that Duplicati 2.0 will be based on. This storage engine addresses many wishes of our users and provides a solid basis for future feature development. A rough overview of the changes has already been given in the pre-announcement. The new features of the first experimental release are also included.


The new storage engine in this release is completely different from the one in Duplicati 1.3 and is not compatible with your old backups. To try Duplicati 2.0, you have to set up a new backup and store it in a new folder. This will not change in the final release: 2.0 will not be compatible with 1.3!


A short explanation of the difference: Imagine your local files consist of many small bricks in different shapes and colors. Duplicati takes your files, breaks them down into single bricks and stores these bricks in small bags. Whenever a bag is full, it is stored in a huge box (which is your online storage). When something changes, Duplicati puts the new bricks into a new bag and puts it into the box. When a local file needs to be restored, Duplicati knows which bricks it needs and which bags they are in. So, it grabs the required bags, takes out the bricks and rebuilds your file. If the file is still on your computer (in a version you do not want anymore), Duplicati can just replace the wrong bricks, thus updating the existing file.
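In technical terms, the bricks are content blocks identified by their hash, the bags are volume files on the remote storage, and the per-file brick list is an index. A minimal sketch of this kind of block-based deduplication (illustrative Python, not Duplicati's actual code; the names and the tiny block size are assumptions):

```python
import hashlib

BLOCK_SIZE = 4  # tiny for illustration; real backup tools use blocks of ~100 KB

def split_into_blocks(data, size=BLOCK_SIZE):
    """Break a file's bytes into fixed-size blocks (the "bricks")."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def backup(files, store):
    """Store each unique block only once, keyed by its hash, and record
    a per-file list of block hashes so files can be rebuilt later."""
    index = {}
    for name, data in files.items():
        hashes = []
        for block in split_into_blocks(data):
            h = hashlib.sha256(block).hexdigest()
            store.setdefault(h, block)  # only unseen blocks are added
            hashes.append(h)
        index[name] = hashes
    return index

def restore(name, index, store):
    """Rebuild a file by fetching its blocks in order."""
    return b"".join(store[h] for h in index[name])
```

Backing up b"ABCDABCD" and b"ABCDXXXX" with this sketch stores only two unique blocks, because the shared ABCD block is uploaded once and reused.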


From time to time, Duplicati will notice that there are a few bags that contain bricks it does not need anymore. It grabs those bags, sorts the bricks and puts the required bricks into new bags, and puts them back into the box. Duplicati will also notice bags that only contain a very small number of bricks. Duplicati grabs those bags, takes out all bricks, puts them into one new bag and puts this into the box.
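This sorting-and-merging step can be sketched as a small compaction routine: drop blocks that no backup version references anymore, then merge volumes that have become too small (again purely illustrative; the threshold and the data layout are assumptions):

```python
def compact(volumes, needed, min_blocks=3):
    """Rewrite volumes (the "bags"): discard blocks that are no longer
    referenced by any backup, then merge undersized volumes into one."""
    # Keep only the blocks that are still needed.
    kept = [[b for b in vol if b in needed] for vol in volumes]
    # Volumes that are still reasonably full stay as they are.
    full = [vol for vol in kept if len(vol) >= min_blocks]
    # Blocks from small (but non-empty) volumes go into one new volume.
    small = [b for vol in kept if 0 < len(vol) < min_blocks for b in vol]
    if small:
        full.append(small)
    return full
```

For example, three volumes holding blocks ["a", "b", "c"], ["d", "e"] and ["f"], with "e" no longer needed, compact down to ["a", "b", "c"] and a merged ["d", "f"].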


This pretty much explains how the new storage works. And to repeat the good news: there is no need to upload full backups regularly. This makes Duplicati a perfect choice for incremental backups of large media libraries. We have published more technical details in the document "Block based storage format (for secure, online backups)".

As the storage is so different, we also had to come up with a new command line interface. While the command line interface of Duplicati 1.3 was very technical, the new interface was designed to be much easier to use. Let me give you an example:


backup ftp://me:ghZtgd5S@example.com/backup D:\Documents D:\Movies


This simple command does a lot for you. It makes a backup of two local folders to the specified FTP server. It deletes old backup data (the default is to keep everything for a month). Then, if required, it automatically compacts the backup, i.e. it replaces some of the files to get rid of deleted data and to merge small files into larger ones. Finally, it downloads a sample of backup files and checks their integrity.


When something needs to be restored, it is as simple as the backup. The following command will restore the latest version of all files in the backup to their original location.


restore ftp://me:ghZtgd5S@example.com/backup *


Some facts about the new command line interface: Multiple folders are now simply separated with a space. All settings for the remote storage can be part of the URL, so it is easy to share server settings. And yes, "backup" is the only command that users need; you no longer have to answer the questions "When will I make the next full backup?" and "What are the right settings to delete old backups?". It has become much simpler than before. We hope that the new defaults please most users (50MB maximum file size; all backup data is stored for at least one month before it can be deleted). More command line functions can be found using the 'help' command.


Duplicati now has 7z/LZMA2 compression. 7z/LZMA2 compresses better than zip/deflate, but it requires considerably more memory and CPU power. That is why we decided to keep zip/deflate as the default and make 7z/LZMA2 optional. 7z/LZMA2 uses multiple CPU cores, so you might want to try it if storage space is important and you have the CPU power. To speed up 7z/LZMA2 even more, we implemented a filter that excludes specific file types from compression; those files are written directly into the 7z file. We added a list of file types that already contain compressed data, so these files are not compressed a second time. The list currently includes JPEG files, zip files, audio and movie containers, compressed office documents etc. Please have a look at the help file for more information on how to use the new 7z/LZMA2 compression efficiently.
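The idea behind the skip-list is easy to sketch with Python's standard zipfile module: choose ZIP_STORED (no compression) for file types that are already compressed, and ZIP_DEFLATED for everything else. The extension list below is an illustrative subset, not Duplicati's actual list:

```python
import zipfile

# Illustrative subset of already-compressed formats (assumption, not the real list).
NO_COMPRESS = {".jpg", ".jpeg", ".mp3", ".mp4", ".mkv", ".zip", ".7z", ".docx"}

def method_for(name):
    """Pick a compression method: store pre-compressed files as-is,
    deflate everything else."""
    ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
    return zipfile.ZIP_STORED if ext in NO_COMPRESS else zipfile.ZIP_DEFLATED
```

Skipping such files avoids wasting CPU time on data that will not shrink; the same principle applies whether the container format is zip or 7z.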


As said, this is an experimental build. We have tested all features on our computers and they seemed to work fine, but we cannot guarantee that there are no major issues in this version. That is, you should not use it to back up really important data that you rely on! If you find that something is not working as expected, please get in touch with us. To analyse what is going wrong, you might want to attach the output of the following command to your issue report or email.


create-report ftp://me:ghZtgd5S@example.com/backup D:\logfile


Although the technical changes required to make online backups more reliable and convenient are huge, the new version is still built on a strict Trust-No-One (TNO) concept: all backup data is encrypted on your local computer before it is uploaded anywhere. If you choose a strong password (the longer the better), you could probably publish your backup on the internet and it would still be safer than the original data on your computer. Duplicati is built to protect your data and privacy!
