Duplicati 2.3 Stable: More Power, New Cloud Backups, and a Big Price Drop

Duplicati 2.3 Stable has officially launched, introducing cloud suite support, multi-destination backups, and ransomware protection through immutable storage. Alongside these features, the company has slashed console subscription prices by over 50% while refocusing the free tier on smaller, local setups.

News

Introducing Duplicati Index

Duplicati Index is a new feature that transforms static backups into a searchable knowledge base, allowing users to find files or query data with AI models without needing to perform a manual restore. By integrating with automation tools while maintaining existing encryption, it shifts the role of backups from passive disaster recovery to an active, usable resource for daily workflows.

News

Every Enterprise Pays Twice for Data. Here’s Why

There is a quiet inefficiency sitting inside almost every enterprise today, and it rarely shows up on a dashboard. It doesn’t look like a broken system. In fact, on paper, everything seems well-structured. Data is stored, compliant, and backed up. Analytics teams have pipelines, warehouses, and dashboards. AI teams are experimenting with models and buying datasets to improve them. But if you trace how data actually moves through the company, a pattern starts to emerge. The same organization is paying once to store its own history, and then paying again to buy someone else’s version of it...

Article

What If You Could Ask Your Company: What Actually Worked in 2018?

There’s a moment that happens inside almost every company once it starts taking AI seriously. A team is trying to build something new. Maybe it’s a model to predict churn, or an agent that helps support reps, or a trading strategy that adapts faster to market changes. The first instinct is always the same: pull data from Snowflake, maybe pipe in some logs from S3, stitch together what’s available, and start experimenting in Python with Pandas, PyTorch, or whatever stack the team prefers. And then someone asks a deceptively simple question: “Didn’t we try something like this before?” That’s where things break. Because the real answer isn’t in Snowflake. It isn’t in your dashboards. It isn’t even in your current data pipelines. The answer exists, but it lives somewhere far less accessible: in backups, in old environments, in historical states of the company that no system was ever designed to query. And today, there is no way to ask that question properly...

Article

The Best AI Models Are Trained on Your Own Company, Not the Internet

There’s a quiet assumption baked into most enterprise AI strategies right now: if you want better models, you go get more data. Usually that means buying datasets, subscribing to data marketplaces, or scraping the internet harder. That assumption is wrong. The highest-quality dataset your company will ever have is the one it already generated: years of decisions, mistakes, experiments, and outcomes. The problem is not that this data doesn’t exist. The problem is that it’s trapped in systems that were never designed to be used for learning. Most teams feel this gap every day, even if they don’t describe it that way...

Article

Every Company Already Has a Digital Twin. It’s Just Locked in Backups

Most companies think of a “digital twin” as something futuristic. Something you build with simulations, sensors, or expensive infrastructure. Something that belongs in manufacturing, not in a law firm, a hedge fund, or a SaaS company. But if you zoom in on how companies actually operate day to day, something more interesting shows up: You are already generating a complete, time-indexed record of how your business works. It’s just not being used...

Article

Why Companies Spend Millions on Data They Already Own

There’s a moment that happens inside almost every data-driven company that is trying to adopt AI. A team decides they need better data to train models. They open up procurement, start evaluating vendors, and within a few weeks they are paying for access to datasets through marketplaces—credit card transactions, satellite imagery, sentiment feeds, supply chain signals. If they are already using Snowflake, the path is even easier. The Snowflake Data Marketplace makes it feel like progress: click, subscribe, query. It looks like momentum. It feels like sophistication. It is almost always the wrong starting point. Because at the exact same time, in the same company, there are years—often decades—of proprietary data sitting in backups, untouched. Not because it lacks value, but because the systems that store it were never designed to make it usable...

Article

Backup Companies Optimized for Disaster Recovery Will Lose the AI Era

There is a moment that happens inside almost every enterprise once they seriously try to build AI. It usually starts in a place that feels modern. A data team is working in Snowflake or Databricks. Models are being prototyped in Python using Pandas, PyTorch, or XGBoost. Product teams are piping logs into S3. Maybe there is even a vector database like Pinecone or pgvector starting to take shape. On the surface, it looks like a company that is becoming “AI-native”...

Article

Snowflake Stores Your Data. It Doesn’t Understand Your History.

Most teams that rely on Snowflake believe they already have their data problem solved. If you walk into a modern company—especially anything even slightly data-driven—you’ll find a familiar stack: product events flowing through Segment or RudderStack, landing in S3, modeled through dbt, and ultimately queried in Snowflake. Dashboards sit on top in tools like Tableau or Looker. Analysts write SQL, product managers check funnels, and leadership reviews weekly metrics...

Article

Your Product Logs Are a Better Dataset Than Anything You Can Buy

If you walk into almost any SaaS company today, the data stack looks deceptively modern. Product teams live inside Mixpanel or Amplitude, watching funnels and retention curves. Support teams operate out of Zendesk or Intercom, handling thousands of tickets that quietly capture every edge case the product fails to explain. Engineering has logs flowing through Datadog, maybe stored in S3, sometimes piped into Snowflake if someone cared enough to model them. And somewhere in the background, backups are running through systems like Duplicati, quietly storing everything that matters and nothing that is actually used. On paper, this looks like a company that is “data-driven.” In practice, all of these systems are optimized for observation, not learning. The product analytics stack tells you what happened. It does not help you build something that improves on it...

Article

Why Enterprise Search Is a Dead End for AI

Most teams that buy enterprise search tools like Glean are not actually trying to “search.” They are trying to answer questions about how their company works, why certain decisions were made, and what is likely to happen next. Search feels like progress because it surfaces information faster, but it quietly locks companies into a shallow interaction with their own data. It retrieves documents. It does not understand systems. If you look closely at how modern teams operate, the gap becomes obvious...

Article

Factories Have 20 Years of Failure Data. None of It Trains Their AI Models

Walk into a modern factory and the story sounds familiar. There is no shortage of data. Machines stream logs into historians. Maintenance teams record work orders in systems like SAP PM or IBM Maximo. Sensors feed dashboards built on tools like Ignition, OSIsoft PI, or even custom pipelines into Snowflake or Databricks. When something breaks, there is usually a record somewhere—often several...

Article

Your HR System Knows Who’s Going to Quit Before They Do

Most HR teams already feel this, even if they can’t quite prove it. A manager flags that someone seems disengaged. A performance review starts slipping in tone. Internal Slack messages get shorter, less frequent. A high performer suddenly stops contributing in meetings. By the time attrition shows up in a dashboard, the decision has already been made weeks earlier. The frustrating part is not that the signal doesn’t exist. It’s that it lives in too many places, none of which were designed to work together...

Article

Every Case Your Firm Has Ever Won Is Locked in a Backup

In most law firms, the way legal work actually gets done has not changed nearly as much as the tooling suggests. A litigation team preparing for a new matter still starts by pulling cases from Westlaw or LexisNexis, digging through iManage or NetDocuments to find prior briefs, scanning email threads in Outlook for context, and asking around internally to see if anyone has “seen something like this before.” From there, they assemble a draft, circulate it, revise it, and slowly converge on a strategy...

Article

Hospitals Store Decades of Patient Data. None of It Trains Their AI.

Walk into any hospital system today and you will find two completely different data worlds operating side by side. On one side is the clinical stack. Electronic health records in Epic or Cerner. Imaging systems storing radiology scans. Lab systems capturing results over years. Research teams exporting slices of this data into Python notebooks, building models to predict readmissions, detect anomalies, or assist with diagnosis. On the other side is the backup layer. Nightly snapshots of everything. Years of patient records, clinical notes, imaging metadata, billing history, and operational data stored for compliance. Locked, encrypted, and rarely touched unless something breaks...

Article

Quant Firms Are Sitting on the Most Valuable Dataset They Don’t Use

A typical quant research workflow today is highly optimized, but narrowly scoped. A researcher pulls tick data from kdb+, joins it with alternative datasets sourced through Snowflake or a marketplace, engineers features in Python, and runs experiments tracked in MLflow or Weights & Biases. Results are pushed into internal dashboards and debated with portfolio managers. This system is fast, sophisticated, and expensive—and yet it is incomplete, because it systematically excludes the most valuable dataset the firm already owns: its own history of decisions.

Article

Beyond Backup: The Evolution of Institutional Memory

Duplicati is evolving from a trusted open-source backup tool into AI-native training data infrastructure, designed to turn cold archives into searchable, vectorized datasets for modern AI workflows. By bridging the gap between historical storage and operational analytics, we are helping organizations unlock their institutional memory to build more intelligent, data-driven systems.

News

Secure Your AI's Brain: How to Back Up OpenClaw with Duplicati

Protect your AI’s long-term memory and sensitive credentials from hardware failure or accidental loss. Learn how to use Duplicati’s military-grade encryption to create secure, automated backups of your OpenClaw (formerly Moltbot) instance today.

News

The Local Database Explained: How Duplicati Tracks Your Backups and Recovers When Needed

Duplicati’s encrypted backups are intentionally opaque, so the local SQLite database acts as the essential “map” that makes fast incrementals and restores possible. This article shows how Duplicati rebuilds that map from remote index files (or, if needed, full data scans) and explains the last-resort RecoveryTool that can salvage files even in worst-case disasters. 
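
To make the "map" idea concrete, here is a small hypothetical Python sketch: a SQLite table that records which remote volume holds each content block, rebuilt by reading compact index files instead of downloading full data volumes. The file layout, field names, and schema are invented for illustration and are not Duplicati's actual formats.

```python
import json
import sqlite3

# Hypothetical layout: each remote "index file" is a small JSON document listing
# the block hashes stored in one data volume. Real Duplicati index files are
# structured differently; this only illustrates why the local map can be rebuilt
# cheaply from index files alone.
def rebuild_local_index(index_files, db_path="local-index.sqlite"):
    db = sqlite3.connect(db_path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS blocks ("
        "hash TEXT PRIMARY KEY, "     # content hash of a data block
        "volume TEXT NOT NULL, "      # remote data volume that contains it
        "size INTEGER NOT NULL)"
    )
    with db:  # one transaction for the whole rebuild
        for path in index_files:
            with open(path, "r", encoding="utf-8") as fh:
                entry = json.load(fh)  # {"volume": ..., "blocks": [{"hash": ..., "size": ...}]}
            db.executemany(
                "INSERT OR REPLACE INTO blocks (hash, volume, size) VALUES (?, ?, ?)",
                [(b["hash"], entry["volume"], b["size"]) for b in entry["blocks"]],
            )
    return db

# With the map rebuilt, a restore can answer "which volumes do I need?" locally,
# before downloading anything.
def volumes_for_blocks(db, wanted_hashes):
    placeholders = ",".join("?" for _ in wanted_hashes)
    rows = db.execute(
        f"SELECT DISTINCT volume FROM blocks WHERE hash IN ({placeholders})",
        list(wanted_hashes),
    )
    return {volume for (volume,) in rows}
```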

Blog

Introducing Machine Tagging in the Duplicati Console

Duplicati is built in close collaboration with its community. Join the forum to discuss use cases, report issues, suggest improvements, or follow ongoing development.

News

Introducing Intelligent Scheduled Reports in the Duplicati Monitoring Console

Duplicati’s Monitoring Console now delivers AI-generated summary reports on a schedule you define, giving you instant insight into recent backup activity. Customize the focus, language, and delivery channel to get exactly the information your team needs, right when you need it.

News

Duplicati: Zero-Trust Backups with Keep-Your-Own-Keys (BYOK / CMK)

Encrypt before you trust. Duplicati’s keep-your-own-keys (BYOK/CMK) design ensures zero-trust backups—only you hold the keys, and no one else can read your data.
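
The principle is easy to demonstrate. Below is a minimal sketch of client-side, keep-your-own-keys encryption using the Python cryptography package and AES-GCM; it is illustrative only, since Duplicati itself is written in C# and implements this differently, but the property is the same: only ciphertext ever reaches the storage provider.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The key is generated and kept on the client; it is never sent to the provider.
key = AESGCM.generate_key(bit_length=256)

def encrypt_for_upload(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                        # unique per message
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext                     # this is all the provider ever stores

def decrypt_after_download(blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)  # raises if tampered with

assert decrypt_after_download(encrypt_for_upload(b"backup block")) == b"backup block"
```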

Blog

Efficient Storage with Block-Level Deduplication in Duplicati

Duplicati minimizes backup size through block-level deduplication, storing each unique data block only once across all versions. This design saves storage space, reduces upload time, and keeps encrypted backups efficient even as data changes over time.
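
The core mechanism fits in a few lines. The sketch below is a deliberate simplification, assuming fixed-size blocks, SHA-256 hashes, and an in-memory store; Duplicati's real pipeline uses a configurable block size and also compresses and encrypts blocks before upload.

```python
import hashlib

BLOCK_SIZE = 100 * 1024   # illustrative; the real block size is configurable
stored_blocks = {}        # hash -> block bytes, i.e. the deduplicated store

def backup_file(data: bytes):
    """Split a file into blocks and store only blocks not seen before."""
    block_list = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in stored_blocks:   # new content is stored exactly once
            stored_blocks[digest] = block
        block_list.append(digest)         # each version only references hashes
    return block_list                     # enough to reconstruct this version

def restore_file(block_list):
    return b"".join(stored_blocks[digest] for digest in block_list)

# Two versions that share most content share most blocks, so the second
# version adds almost nothing to the store.
v1 = backup_file(b"A" * 300_000)
v2 = backup_file(b"A" * 300_000 + b"new tail")
assert restore_file(v2).endswith(b"new tail")
```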

Blog

Duplicati 2.2 - New Stable Release

We’re excited to announce Duplicati 2.2, the next stable release, bringing a redesigned interface, major performance improvements, and new storage options.

News

Using Duplicati Retention Rules for GFS-Style Backups

Duplicati’s retention policies make it easy to implement a Grandfather-Father-Son strategy without manual rotation.
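
Conceptually, a GFS policy is a list of "within this timeframe, keep at most one version per interval" rules: daily backups for a week, weekly for a month, monthly for a year. The Python sketch below shows how such rules thin a set of backup timestamps; the rule values and edge-case handling are illustrative and not Duplicati's exact retention semantics.

```python
from datetime import datetime, timedelta

# Each rule: within `timeframe` of now, keep at most one backup per `interval`.
RULES = [
    (timedelta(days=7),   timedelta(days=1)),    # daily for a week
    (timedelta(days=30),  timedelta(weeks=1)),   # weekly for a month
    (timedelta(days=365), timedelta(days=30)),   # monthly for a year
]

def select_kept(backups, now):
    """Return the backup timestamps a GFS-style policy would keep."""
    kept = set()
    for timeframe, interval in RULES:
        window_start = now - timeframe
        last_kept = None
        for ts in sorted(b for b in backups if b >= window_start):
            if last_kept is None or ts - last_kept >= interval:
                kept.add(ts)
                last_kept = ts
    return sorted(kept)

now = datetime(2024, 6, 1)
backups = [now - timedelta(hours=6 * i) for i in range(200)]  # 4 per day, ~50 days back
print(len(select_kept(backups, now)), "of", len(backups), "versions kept")
```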

Blog

Unleashing the power of SQLite to C#

SQLite is central to how Duplicati tracks and manages backup data. Unlocking high performance required targeted tuning to handle large datasets and heavy query loads efficiently.
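
The article's details are specific to C# and Duplicati's own database layer, but the levers are the same in any SQLite binding: write-ahead logging, a relaxed fsync policy, a larger page cache, and batching writes into one transaction. The Python sketch below illustrates that kind of tuning; the pragma values are examples, not Duplicati's actual settings.

```python
import sqlite3, time

db = sqlite3.connect("bench.sqlite")
db.execute("PRAGMA journal_mode=WAL")     # readers no longer block the writer
db.execute("PRAGMA synchronous=NORMAL")   # fewer fsyncs; a safe pairing with WAL
db.execute("PRAGMA cache_size=-65536")    # 64 MiB page cache (negative means KiB)
db.execute("CREATE TABLE IF NOT EXISTS blocks (hash TEXT PRIMARY KEY, size INTEGER)")

rows = [(f"hash-{i:08d}", 102400) for i in range(200_000)]

start = time.perf_counter()
with db:  # one transaction committed at the end; committing per row is far slower
    db.executemany("INSERT OR REPLACE INTO blocks VALUES (?, ?)", rows)
print(f"inserted {len(rows)} rows in {time.perf_counter() - start:.2f}s")
```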

Blog

Cut restore times by 3.8x - A deep dive into our new restore flow

A reworked restore flow significantly improves performance, reducing downtime when recovery counts.

Blog

Stable release of Duplicati 2.x

After years in beta, Duplicati has finally moved forward. New resources, community effort, and major feature additions have finally broken the long beta-only release cycle.

News

Tech Deep Dive: Taming CPU Utilization in Duplicati

New CPU pressure limits make it possible to control Duplicati’s resource usage and reduce system impact.

News

Announcing Remote Backup Management in the Duplicati Console

A major update brings centralized remote backup management to the Duplicati Console. IT teams can now deploy, manage, monitor, and restore machines at scale from a single place.

News

Introducing secrets with 10 different providers

New secret providers expand Duplicati’s options for securely injecting sensitive data, with broad support available from the start.

News

Tech Deep Dive: Encrypting backups without slowdown

A fix to AES header IV generation delivered up to a 1.85× speedup, making encryption effectively free in this case.

Blog

Tech Deep Dive: Tuning for 1000x speedup

A performance bottleneck surfaced during extensive benchmarking. Tracking it down led to a targeted fix that significantly reduced overall runtime.

Blog

Migrating apps in DigitalOcean with 30x less downtime

A live infrastructure migration had to be done with no safe downtime window. By working around DNS and TLS constraints, downtime was reduced from minutes to mere seconds without losing data.

Blog

Why use JWT for authentication tokens (hint: it’s not performance)

A common security shortcut is put under the microscope. Relying on custom session tokens instead of proven standards like JWT can quietly shift the advantage to attackers.
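
Part of the argument is that the hard parts of token handling (expiry, signature verification, algorithm negotiation) are already solved by established libraries, so a hand-rolled scheme buys complexity rather than speed. A minimal sketch with the PyJWT package, purely illustrative since Duplicati's server is C#:

```python
import datetime
import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-long-random-secret"  # server-side secret, never sent to clients

def issue_token(user_id: str) -> str:
    claims = {
        "sub": user_id,
        "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(minutes=15),
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def verify_token(token: str) -> str:
    # Raises on an expired or tampered token, so validation needs no server-side
    # session lookup at all.
    claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
    return claims["sub"]

assert verify_token(issue_token("user-42")) == "user-42"
```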

Blog

Secure by design: Using hashing and encryption to provide tamper-resistant, verifiable backups

A request for a security-focused overview revealed a gap in existing documentation. Duplicati’s architecture and design choices are broken down to show how they work together to maximize security.
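
One building block of that design is easy to show in isolation: record a cryptographic hash of every volume at upload time and verify downloads against it before trusting their contents. The sketch below illustrates only that verification step, with invented volume names; the real design also layers encryption and finer-grained block hashes on top.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

expected = {}  # volume name -> hash recorded at backup time

def record_upload(volume_name: str, contents: bytes):
    expected[volume_name] = sha256_hex(contents)

def verify_download(volume_name: str, contents: bytes) -> bytes:
    # Refuse anything that does not match what we uploaded.
    if sha256_hex(contents) != expected[volume_name]:
        raise ValueError(f"{volume_name} failed verification: data was altered or corrupted")
    return contents

record_upload("volume-0001.dat", b"encrypted volume bytes")
verify_download("volume-0001.dat", b"encrypted volume bytes")   # passes
# verify_download("volume-0001.dat", b"tampered bytes")          # would raise
```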

Blog

Duplicati Quarterly Update

Three months in, Duplicati Inc. reflects on a fast-moving first quarter. From a major open-source beta release to a refreshed website, the project has already hit several key milestones.

News

Securing a JSON file with a hidden signature

A routine upgrade uncovered a fragile assumption in an old file format. When .NET 8 removed the “one golden zip,” the lack of a version field turned a simple change into a breaking problem.
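
The general pattern described there, independent of the specific file, is to embed a keyed signature computed over the rest of the document, so that older readers simply ignore the extra field while newer readers can verify integrity and versioning. A minimal sketch, assuming HMAC-SHA256 and a canonical (sorted-key) JSON serialization; the field name and key handling are illustrative, not Duplicati's exact scheme.

```python
import hashlib, hmac, json

KEY = b"builder-signing-key"  # illustrative; the real key and field name differ

def sign_document(doc: dict) -> dict:
    payload = json.dumps(doc, sort_keys=True, separators=(",", ":")).encode()
    signed = dict(doc)
    signed["signature"] = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return signed  # readers that don't know about the field just ignore it

def verify_document(signed: dict) -> bool:
    doc = {k: v for k, v in signed.items() if k != "signature"}
    payload = json.dumps(doc, sort_keys=True, separators=(",", ":")).encode()
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed.get("signature", ""))

manifest = {"version": 2, "files": ["update.zip"]}
assert verify_document(sign_document(manifest))
```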

Blog

Migrating from .NET 4 to .NET 8 in 300+ commits

Duplicati is a free and open source backup client that securely stores encrypted, incremental, compressed backups on cloud storage services and remote file servers.

Blog

Open Core Ventures Announces Duplicati Inc Launch

Open Core Ventures proudly announces the launch of Duplicati Inc.

News

Introducing the Duplicati Portal: Your New Hub for Cloud-Based Backup Monitoring and Management

We’re excited to announce the launch of our first commercial feature, designed to make it easier to set up and manage your Duplicati backups: the Duplicati Portal.

News

Introducing the new Duplicati website

As part of launching the new company, we are hitting many milestones and building tremendous momentum.

News

Introducing Duplicati, Inc.

The project has grown substantially, with millions of backups running each month and a vibrant user community that found a home with the introduction of the forum.

News

Get started for free

Pick your own backend and store encrypted backups of your files anywhere online or offline. For macOS, Windows, and Linux.
