Comments

  • Changing Drive Letters of Data
    Are you using the new Backup Format for your Cloud backups? If so, then I am afraid you cannot change the drive letter the way you can with the Legacy format. I do not know whether the New Backup Format's deduplication would make the re-upload go any faster. Perhaps someone else has experience with this issue.
  • David Gugick
    David is no longer with MSP360
  • An error occurred (code: 1003) on several servers since upgrade to 7.9.4.83
    Others have reported this issue to tech support. What do you use for cloud storage?
  • Which backup schedule to choose?
    Not sure what your retention period is, and whether you have a requirement for using GFS.
    But assuming you don’t need GFS, if you want to keep say 90 days of backups, you would need to start backing up using legacy format going forward and keep the existing new format .bak files for 90 days until you can safely delete them from cloud storage. I use Cloudberry Explorer to delete the files that are no longer needed.
    I assume that you are not using the MSP360 SQL backup, so it is a little trickier to keep only 90 days of .bak files since, as you stated, they are all unique files and would never get purged. There is a way to do it (a rough sketch follows below this comment) and I would be happy to explain how to set it up if you want.
    Just to clarify, you can't change existing backups from one format to another.
    If you still have the .bak files from the past x days on primary storage, you could back them up again using legacy format and then you could delete the NBF .bak files right away.
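    For illustration only (this is not from the thread): a minimal Python/boto3 sketch of the age-based cleanup idea, assuming legacy-format storage where the .bak files appear as individual objects in an S3-compatible bucket. The endpoint, bucket, and prefix names below are hypothetical placeholders. Test with a dry run first and follow any deletion with a repository sync.

    ```python
    # Minimal sketch (not the poster's exact setup): prune .bak objects older than
    # 90 days from S3-compatible storage. Endpoint, bucket, and prefix below are
    # hypothetical placeholders -- substitute your own values.
    from datetime import datetime, timedelta, timezone
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.us-east-1.wasabisys.com",  # example endpoint (assumption)
        aws_access_key_id="YOUR_KEY",
        aws_secret_access_key="YOUR_SECRET",
    )

    BUCKET = "my-backup-bucket"        # placeholder
    PREFIX = "CBB_SERVER01/SQLBaks/"   # placeholder backup prefix
    cutoff = datetime.now(timezone.utc) - timedelta(days=90)

    to_delete = []
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            if obj["Key"].lower().endswith(".bak") and obj["LastModified"] < cutoff:
                to_delete.append({"Key": obj["Key"]})

    # delete_objects accepts at most 1000 keys per call
    for i in range(0, len(to_delete), 1000):
        s3.delete_objects(Bucket=BUCKET, Delete={"Objects": to_delete[i:i + 1000]})

    print(f"Removed {len(to_delete)} .bak objects older than 90 days")
    ```

    Scheduling something like this daily would keep the .bak set at a rolling 90 days; the same pruning can also be done by hand in Cloudberry Explorer.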
  • Which backup schedule to choose?
    If you deleted all the legacy format data from the cloud, then you'd have to re-upload everything, but it may be worth it. Question: are you using the SQL license from MSP360, or are you just backing up the .bak files?
  • Which backup schedule to choose?
    The new format is not really suited for file backups such as you describe. IMHO, you should stick with the legacy format for SQL backup files and any other file-based cloud backups. We use the new format for Image/Hyper-V VM backups, which have a short retention period (1-2 weeks), as it allows us to do synthetic fulls, which reduces the full backup runtime by 75-80%.
  • Can I perform file based backup for virtual machine.
    Can you show me your backup plan settings?
  • Backup plan configuration
    Go to the MBS portal Computers page, select the client/computers using the search and checkboxes, then choose "Plan Settings Report" from the menu.
    It will prompt for an email address and will send a link to get the .csv settings report.
    It shows the following:
    • Company
    • User
    • Login
    • Computer
    • Profile
    • Plan Name
    • Backup Format Type
    • Storage Destination
    • Source Folders
    • Excluded files
    • Encryption (Yes/No)
    • Compression (Yes/No)
    • Advanced Settings
    • Retention Policy
    • Notification (Yes/No)
    • Schedule
    • Full Backup Schedule
    It is not the prettiest documentation, but it gets most of what you need (a quick parsing example follows below).
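    As a side note (not from the original reply): once you have the .csv, a few lines of scripting can turn it into a quick audit. A minimal Python sketch, assuming the column headers match the list above (verify against the actual header row of your report), that flags plans without encryption:

    ```python
    # Minimal sketch: flag backup plans with encryption disabled in the MBS
    # "Plan Settings Report" CSV. Column names are assumed from the list above;
    # check your report's actual header row before relying on them.
    import csv

    with open("plan_settings_report.csv", newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            if row.get("Encryption", "").strip().lower() != "yes":
                print(f"{row.get('Company', '?')} / {row.get('Computer', '?')}: "
                      f"plan '{row.get('Plan Name', '?')}' is not encrypted")
    ```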
  • Backups not working, bug in 7.9.4.83 gives 1003 error
    What is your cloud storage location?
  • Error 1003: Unable to write data to the transport connection
    We experienced a rash of 1003 errors with Backblaze a few months back.
    I created a new account with an East coast Data Center and have not had the problem since.
  • Deleting Orphaned Data
    I don't use the MSP360 data deletion option; rather, I use Cloudberry Explorer to delete orphaned data, then run a repo sync to get things right (if necessary).
    If you go to Users and click on the green icon, it shows you the MBS prefix for the client. I then go into CB Explorer and delete the data directly from storage (a small dry-run sketch follows below this comment). It can take a while, so you need to leave CB Explorer open, but at least I know what is getting deleted. I then run a repo sync if the machine in question is still active; otherwise there is nothing else to do once the data is deleted from the backend storage.
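    Not part of the workflow above, but along the same lines: a minimal Python/boto3 sketch (bucket and prefix are placeholders, credentials assumed to come from your environment) that totals up what sits under a client's MBS prefix before you delete anything, so you can sanity-check the scope first.

    ```python
    # Minimal sketch: count objects and total size under a client's MBS prefix
    # before deleting anything. Bucket and prefix are hypothetical placeholders.
    import boto3

    s3 = boto3.client("s3")           # endpoint/credentials from your environment
    BUCKET = "my-backup-bucket"       # placeholder
    PREFIX = "CBB_ClientPrefix/"      # placeholder: the prefix shown under Users in the MBS portal

    count = total_bytes = 0
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            count += 1
            total_bytes += obj["Size"]

    print(f"{count} objects, {total_bytes / 1024**3:.1f} GiB under {PREFIX}")
    ```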
  • Move data between Backup storage devices
    The data will not be moved automatically. You have a couple of options:
    1. Copy/move the backup folder from the old NAS to the new one (see the copy sketch after this comment), then create a new local storage account for the new NAS and run a repository sync. You can then pick up where you left off.
    2. If you want to keep the old backups on the original NAS and only have newer files backed up to the new location, then create a local storage account for the new location and set the Advanced filter to "back up objects modified since xx/xx/xxxx date". It will only back up files created/modified after that date.

    If you use option 2 and have to do a complete restore, you could restore the more recent files first then do a second restore for the older stuff.
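    For option 1, the copy itself is just a straight folder copy (robocopy or similar also works). A minimal sketch, assuming both NAS shares are reachable and using placeholder paths; run a repository sync on the new storage account once it finishes.

    ```python
    # Minimal sketch: copy the existing backup folder from the old NAS to the new one.
    # Paths are hypothetical placeholders; both shares are assumed to be accessible.
    import shutil

    OLD = r"\\old-nas\backups\CBB_SERVER01"   # placeholder
    NEW = r"\\new-nas\backups\CBB_SERVER01"   # placeholder

    shutil.copytree(OLD, NEW, dirs_exist_ok=True)   # dirs_exist_ok requires Python 3.8+
    print("Copy complete -- now run a repository sync against the new storage account")
    ```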
  • Optimum S3/Cloudberry config for desktop data
    For new clients, we now do nightly cloud file backups to Wasabi and BackBlaze B2. It works great.
  • What MSP360 Windows backup strategy do you use for largely static collections?
    I may be atypical, but for this type of data set, I would recommend just using the legacy file format, AND sending a copy to one of the low-cost Storage providers (BackBlaze or Wasabi).
    As Alex points out, if you use the New Backup Format, there will always be two FULL copies of the data consuming space at any given point in time. For local backups that may not matter to you, but for cloud storage it makes a significant difference.
    What is your reason for switching this existing set of data to the New backup Format?
  • Full backup from time to time
    What kind of backups are you referring to? File or Image/VM?
  • Why is Cloudberry Backup For Windows Server So Slow?? (Backblaze B2)
    You say that you are restoring individual files from an image backup? We do file backups using legacy format and Image backups using NBF (to take advantage of synthetic fulls in BackBlaze). We keep one full and daily incrementals of the Image backup, and for files we keep 90 days' worth of versions using legacy format. We have never had an issue with file download speed. If we need an entire system we restore the Image, which includes all of the data - that runs at near-ISP-max download speed.
  • Forever Forward Backup “Times Out”
    Don't use FFI for anything. But using the new backup format with weekly synthetic fulls (weekend) and daily incrementals works well for us. The weekday incrementals take only a very short time.
  • Error code: 1003
    What are you using for backend cloud storage? BackBlaze?
  • Browsing Cloudberry Backups with Explorer PRO
    The New Backup Format combines files into an archive file using a different format than the legacy file backup format. For that and other reasons, we do not use the New Backup Format for data/file backups; only for Image and Hyper-V VHDx file backups.