Comments

  • I have a problem and I do not know how to solve it, can someone help me?
    What's the "ws." in the URL? Is that something you typed, or were you directed to that page? Remove it and see if that resolves the issue.
  • How to check which files are uploaded on incremental backup
    It's possible this is an artifact of archive mode. I'd ask you to open a support case so the team can investigate further.
  • How to check which files are uploaded on incremental backup
    Is this a file or image backup? If image, then there are no files backed up; just disk blocks.
  • How to check which files are uploaded on incremental backup
    Did you try clicking those options to see if they help? It could be a bug, but try clicking them in the interim.
  • How to check which files are uploaded on incremental backup
    Looks like nothing was updated in the last run - try clicking the Last Day or Last Week options and see if the lists of files backed up and purged appear. You can also see a summary of the last run from the Backup Plans tab for the plan in question, in the item called Files Uploaded. Let me know if that helps.
  • SQL best practices
    You can exclude the folders with the database and log files if you do not want to back up those files in the image if it's a concern of size.
  • SQL best practices
    First, do not run separate backups for local and cloud. If you do, and you're using differential backups or taking transaction log backups, you may end up with restore issues. Second, SQL Server will compress the database backup if you enable that option (and you're not using an edition of SQL Server that lacks compressed-backup support). If the full is really 82 GB, that should not be a problem even for slower broadband connections. You did not post the upstream speed, but unless you're saddled with DSL speeds, you should be fine running the full backups at night when there is no other activity. Third, consider performing Hybrid backups from CloudBerry so you have local and cloud backups that are the same and are performed in a single pass (like SolarWinds). Lastly, you can reduce your full backup frequency if your differential backups remain small compared to your full backup size. Consider reducing the full backup frequency to every 2 or 4 weeks, with differentials on the other nights.

    You did not mention transaction log backups. It's possible the database is running in Simple Recovery and they are not needed, or the software that uses SQL Server does not support their use. But if not, I would consider using them and running the t-log backups throughout the day based on your customer's restore point needs. In other words, how much data are they prepared to lose if the database needs to be restored? If it's an hour, then run those t-log backups every hour.
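    The backup-window arithmetic above is easy to sanity-check. A rough sketch, using the 82 GB full mentioned in the thread and two hypothetical upstream speeds (the poster's actual speed is unknown):

```python
# Rough estimate of how long a full backup upload takes, given backup size
# and upstream bandwidth. The 82 GB figure comes from the thread; the
# example speeds are hypothetical, and compression/protocol overhead is ignored.

def upload_hours(size_gb: float, upstream_mbps: float) -> float:
    """Hours to upload size_gb over a link of upstream_mbps."""
    size_megabits = size_gb * 1000 * 8  # GB -> megabits (decimal units)
    seconds = size_megabits / upstream_mbps
    return seconds / 3600

# An 82 GB full over a 50 Mbps uplink:
print(round(upload_hours(82, 50), 1))  # 3.6 hours - fits an overnight window
# The same full over a 5 Mbps DSL-class uplink:
print(round(upload_hours(82, 5), 1))   # 36.4 hours - does not
```

    Compression (and incremental/differential runs) will shrink these numbers considerably; the point is only that a nightly full is realistic on a reasonable uplink.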
  • ERROR: disk I/O error, backups not running
    I'd encourage you to continue working with Support on this. They should be able to figure things out from the logs and a remote session.
  • ERROR: disk I/O error, backups not running
    Could you be out (or very low) on free space on the target disk? Could you also provide details on this virtual disk like the software being used?
  • Quick Support Client Bug - Screen Sharing is paused by Remote Computer
    The engineering team is doing some additional research and we'll reply back here with any updates.
  • malicious and accidental deletions
    If a malicious hacker deliberately destroyed data, they would have to know about CloudBerry and where the backups were stored. Makes me think this was someone known. You can secure the agent with a master password - that feature is used to prevent end-users from using the agent. This feature can be automated from the console using the RMM - Remote Deploy - Create Configuration option. Create configurations for Windows and Mac/Linux and enable the Protect Console with Master Password option. Then create a rule for each config that deploys the settings to all customers (or a subset as needed).

    The rest, I think, is security-related: not storing passwords in web browsers (to prevent someone from logging into the cloud storage vendor via a web browser), disabling credentials as part of the off-boarding process when employees / consultants leave, changing any shared passwords if you use them, etc.

    You can also enable IP Address White Lists in the console to prevent unauthorized access via the web console from unknown IP Addresses.

    And lastly, notify the authorities. If it was someone close, they can probably find out who.
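    For illustration, the allowlist idea works roughly like this. A generic sketch using Python's stdlib `ipaddress` module - not product code, and the networks below are made-up examples:

```python
# Generic illustration of IP-allowlist checking, similar in spirit to the
# console's IP Address White List feature. Not MSP360 code; the example
# networks are hypothetical.
from ipaddress import ip_address, ip_network

ALLOWED_NETWORKS = [
    ip_network("203.0.113.0/24"),    # example: an office range
    ip_network("198.51.100.17/32"),  # example: a single trusted host
]

def is_allowed(client_ip: str) -> bool:
    """Return True if client_ip falls inside any allowed network."""
    addr = ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(is_allowed("203.0.113.42"))  # True  (inside the /24)
print(is_allowed("192.0.2.9"))     # False (unknown address, rejected)
```

    The console does this check for you; the sketch just shows why an attacker coming from an unknown address is turned away even with valid credentials.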
  • Quick Support Client Bug - Screen Sharing is paused by Remote Computer
    I did a quick check in the system and UAC prompts are expected behavior with the Quick Support Client. I think the solution for you is to install the full client as I don't think there is a way to avoid UAC approval when running Quick Support. Thanks.
  • Quick Support Client Bug - Screen Sharing is paused by Remote Computer
    Would it be possible to have the full client installed on these residential computers - in anticipation of them needing remote assistance in the future? Either way, I'll add the feature request to the system on your behalf and we'll respond here with updates. Thanks for taking the time to provide some additional details.
  • Quick Support Client Bug - Screen Sharing is paused by Remote Computer
    Could you describe your use case in a little more detail? I'm asking because most of our customers use the Quick Support Client with a user at the other end - who can easily approve the UAC dialog. It sounds like your use case is different, and I'd be interested in hearing more about it. Thanks!
  • Real-time backup use case
    Eric, if scanning is an issue, the logs will help identify that. Are you using the Fast NTFS Scan option? If not, you can try it. Sending the logs is a one-minute process from the product (Tools - Diagnostic) - please send them for review.
  • Verification Code?
    That's our Two-Factor Authentication. Either you or your Admin enabled 2FA on your account. If you're not the main admin, then you need to reach out to that person. If you are, then run Google Authenticator as that is the app we use for 2FA and there's probably a code waiting for you.
  • Possible Mapped Drive Connection Timeout
    Great. We appreciate the update and please let us know if anything else comes up.
  • Excluding folders and files after an initial backup
    If you know the folders that need to be purged, then you can deselect them from the backup and manually remove them from backup storage (at the folder level). But depending on how many folders and / or files you're dealing with, you might be better off creating a new plan and waiting for the original backup data to age out before manually removing it. I think it would be difficult to adjust retention settings for the original backup to remove just the files no longer needed. It would probably also be too much work to move the files / folders in question to a new location that is not part of the backup - in that case, if you were removing locally deleted files in retention settings, the files would eventually be removed from backup storage. If you have any ideas on a better process, please reply here and we'll discuss internally.
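    The "age out" behavior mentioned above can be sketched generically. A toy illustration of a "remove locally deleted files after N days" retention rule - not product logic, and the 30-day window and names are invented:

```python
# Toy sketch of age-out retention for files deleted locally: a backup copy
# becomes purge-eligible once the file has been gone locally for longer than
# the retention window. Not product code; the 30-day setting is hypothetical.
from datetime import date, timedelta

RETENTION_DAYS = 30  # hypothetical "remove locally deleted files after" value

def purge_eligible(deleted_on: date, today: date) -> bool:
    """True once the local deletion is older than the retention window."""
    return today - deleted_on > timedelta(days=RETENTION_DAYS)

print(purge_eligible(date(2024, 1, 1), date(2024, 1, 20)))  # False (19 days)
print(purge_eligible(date(2024, 1, 1), date(2024, 3, 1)))   # True  (60 days)
```

    In other words, moving or deleting the unwanted files locally only helps if the retention settings actually remove locally deleted files; otherwise the copies sit in storage until removed by hand.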
  • Bare Metal Recovery from network via USB
    You can specify the folder where additional drivers are located (in INF format) during the USB recovery disk build process. I think if you're having trouble with this step, it's best to open a support case for guidance. There is some information here: https://help.cloudberrylab.com/cloudberry-backup/backup/create-a-recovery-disk-flash-drive and here: https://www.cloudberrylab.com/resources/blog/how-to-create-a-bootable-usb-for-windows-server/ and here: https://help.cloudberrylab.com/cloudberry-backup/restore/bare-metal-recovery
  • Bare Metal Recovery from network via USB
    Which Cloud? Glacier, by chance? If so, did you use a lifecycle policy to move the data there or did you back up directly to Glacier?

David Gugick
