• Dhayes
    1
    Hello.
    So we are testing Wasabi and thought the cloud storage deal with Wasabi would be great (and for the most part it is). But since Wasabi bills for 90 days even for deleted files, I can see this causing unexpected costs if you run weekly or bi-weekly fulls. I view the cloud as emergency storage in the event of a massive failure, so I will only keep 7 restore points in the cloud with, say, a weekly full. Assuming the weekly full is 500 GB, even though each full is deleted a week later when the next full runs, I will end up paying for 6 TB of storage (12 weeks of 500 GB fulls) when I am actually only using 500 GB. Obviously this is not good, and there does not seem to be an option to change it on the Wasabi side.
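
    The arithmetic above can be sketched quickly; this is illustrative only, assuming a 500 GB weekly full and Wasabi's published 90-day minimum storage charge:

    ```python
    full_size_gb = 500                   # size of each weekly full backup
    min_billed_days = 90                 # deleted objects are still billed for 90 days
    weeks_billed = min_billed_days // 7  # ~12 weekly fulls on the bill at any time

    billed_gb = full_size_gb * weeks_billed
    print(f"~{billed_gb} GB (~{billed_gb / 1000:.0f} TB) billed "
          f"for only {full_size_gb} GB of live data")
    # → ~6000 GB (~6 TB) billed for only 500 GB of live data
    ```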

    So: what would be the best practice in this scenario to optimize cloud storage costs with CB and Wasabi when you really only want 7 restore points in the cloud?

    Also, how do synthetic fulls work here? My assumption is that since the full's file is neither renamed nor deleted, the 90-day charge would not apply to it. I suspect it does apply to the incrementals that are merged during the synthetic full process, but those would be small.

    Is the assumption with Wasabi that people will need longer than 90 days of cloud storage and thus this will not be much of an issue? As I said we will be keeping all the long term storage on our local device (hopefully via the coming GFS method) and the cloud will only have 7 days.

    I guess I was looking for the gotcha with Wasabi. I am hoping this is not it.

    Thanks
  • Tyson Nielsen
    0
    I'd suggest looking at Backblaze B2 storage; they don't have the 90-day billing for deleted files. You pay for what you use and that's it.
  • David Gugick
    118
    Agree with Tyson. If you have a lot of volatility in backup files within the first 90 days, then archive-style storage is not the best option, as there is normally a minimum retention period in exchange for a very low per-GB cost. Instead, the best approach is either a service that provides storage tiering, so you can automatically move long-term data from a hot tier to a lower-cost archive tier (we support that with AWS and Azure), or a lower-cost cloud storage vendor for your hot data.
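    The tiering approach mentioned above can be sketched as an S3 lifecycle rule; the bucket and prefix names here are hypothetical, and this is only a sketch of the general AWS mechanism, not CB's own configuration:

    ```python
    import json

    # Hypothetical lifecycle rule: keep recent backups in the standard (hot)
    # tier, then move objects older than 90 days to the Glacier archive tier.
    lifecycle = {
        "Rules": [
            {
                "ID": "archive-old-backups",
                "Filter": {"Prefix": "backups/"},   # illustrative prefix
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"}
                ],
            }
        ]
    }

    # With boto3 this would be applied to a real bucket via:
    #   s3.put_bucket_lifecycle_configuration(
    #       Bucket="my-backup-bucket", LifecycleConfiguration=lifecycle)
    print(json.dumps(lifecycle, indent=2))
    ```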
  • Dhayes
    1
    Great info, thanks very much. I was really hoping to use Wasabi, but the advantage of CB is the agnostic nature of the software. I am going to check out Azure. Perhaps it will work better long term anyway for DR testing and such.
    Thanks again
  • Dhayes
    1
    I was looking at Backblaze as well, but I am concerned about their recovery download speeds.
  • David Gugick
    118
    If you like the cloud but need fast restores at times, then a Hybrid backup may be the solution, as you'll have local backups for fast recovery in addition to cloud backups for disaster recovery.
  • Dhayes
    1
    Thanks, David, much appreciated. Actually, I am using the Hybrid option in all our tests; it works wonderfully! However, most of our servers out there are Hyper-V, and we are using the VM version of the software. We were really hoping to use Synthetic Fulls, which Wasabi supports, but the VM version does not support that yet. Then again, if we rule out Wasabi because of the 90-day billing, it may not be a big point (but it is so cool).

    Another thing: we would love separate retention options for local and cloud. For instance, locally we want very long retention using a variation of GFS, but in the cloud we only want 7 days of retention, and that is not supported yet. However, I read somewhere that version 6.0 will support this option, which would be cool.

    Thanks again
  • David Gugick
    118
    Separate retention is coming in 6.0, I believe. GFS retention is also in the works for release later this year.