How to upload 500GB of photos from a home server

Hi everyone!

In my previous post I got some great help regarding best practices for downloading my Google Photos in order to migrate to /e/OS. Unfortunately I was not able to follow the /e/OS guide, since the migration feature has been disabled in /e/OS for now.

Thanks to the brilliant reply, I now have 500GB of photos/videos from my Google Drive stored on my server.

The next step: what are the best practices for uploading these photos to Murena Cloud? The guide does explain integration with Nextcloud, Syncthing and eDrive. Those guides, however, discuss how to sync data when you are already using one of these services. I am using none of them.

The reason I’m being so careful about this is that my photos are very precious to me and I don’t want to lose any of them. There have also been discussions about a server problem with Murena Cloud that caused a lot of synchronization issues. This makes me want to be extra cautious.

What I’m thinking of doing now is simply mounting Murena Cloud in my Fedora file manager as a network drive and copying my photos folder into the Photos folder. This should start the upload process and should also cope with sudden disconnections, since I’m not uploading through the website.
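For completeness, here is the same idea sketched from a terminal. This is only my plan, not a tested recipe; I’m assuming Murena uses the stock Nextcloud WebDAV endpoint, and the exact GVFS mount directory name may differ on your system:

```shell
# Mount Murena Cloud over WebDAV (the same GVFS backend Nautilus uses).
# The endpoint below is the stock Nextcloud path -- verify it for your account.
gio mount davs://murena.io/remote.php/webdav/

# The share then appears under the per-user GVFS mount point; find it with:
ls "/run/user/$(id -u)/gvfs/"

# Then copy with plain cp (-a preserves timestamps, -v shows each file).
cp -av ~/Photos/. "/run/user/$(id -u)/gvfs/"davs*"/Photos/"
```

`gio mount` will prompt for the account password (or an app password) the first time the share is mounted.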

Are my assumptions correct, and is this the best way to upload 500GB of Google Photos?

If not please correct me.

Thank you in advance.

Kind regards


If you have a good connection, that will probably work in Nautilus. I think we’re looking at two GVFS WebDAV mounts? I’m fairly sure the transfer would be funnelled through your machine instead of taking the direct route, but that doesn’t have to matter.

rclone can do WebDAV and has a sync subcommand; that’s what I’d use for large WebDAV transfers that may need retries and a “sync” guarantee. As you can run this server to server, there’s no indirection, but you’d need to be able to SSH into that machine.

rclone details

Issue an app password for the client. Once you have gone through the “rclone config” wizard, the config will look like:

[mur]
type = webdav
url = https://murena.io/remote.php/webdav/
vendor = nextcloud
user = me@murena.io # (or @e.email)
pass = asdf-asdf-longhash
# this is not the plaintext password, but an obscured form created via
# echo -n "app-password" | rclone obscure -

example

rclone sync Documents/ mur:Documents/
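For 500GB I’d do a dry run first and add a few of rclone’s stock flags. A sketch, where the “mur” remote is from the config above and the Photos paths are just placeholders:

```shell
# Preview what would change, without transferring anything.
rclone sync ~/Photos mur:Photos --dry-run

# Real run: show progress, keep parallelism modest for a home uplink,
# retry on errors, and keep a log for later inspection.
rclone sync ~/Photos mur:Photos --progress --transfers 4 --retries 5 \
    --log-file rclone-upload.log --log-level INFO

# Afterwards, verify that source and destination match.
rclone check ~/Photos mur:Photos
```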

As for where to sink 0.5 TB into, I’d choose a vendor with high availability.

Thank you, I will have a look at rclone!

These seem to be WebDAV mounts, yes. At least, that’s what’s written in the address bar.

Hi tcecyk!

Thank you for your suggestion to use rclone! I will have a look and see if I can get it to work.

I do have one question: you suggest using a vendor with high availability. Are you perhaps suggesting that Murena is not one? I know that there have been some outages at Murena, as they openly state, so I will definitely keep a backup myself. But Murena should be up to this task, I think?

If you do not have much lag in syncing automatically from any vendor to a secondary backup, availability doesn’t have to be a top concern. Most users do not have a second leg to stand on, and life tends to get in the way of even the best hobby admins.
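A sketch of what I mean, assuming the “mur” remote from earlier and a local backup path of your choosing:

```shell
# Nightly pull of the cloud copy down to local storage.
# --backup-dir moves files that were deleted or changed on the remote
# into a dated folder, so a cloud-side accident does not propagate silently.
rclone sync mur: /srv/backup/murena \
    --backup-dir "/srv/backup/murena-old/$(date +%F)" \
    --log-file /var/log/murena-backup.log --log-level INFO
```

Stick that in cron and the lag stays small.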

good point! thank you

Sorry, I updated the rclone config snippet. It lacked the domain in the user field (some Nextcloud instances require it) and the note on how the obscured password value is generated from the app password. The “rclone config” dialog takes care of that; it only matters if the config is placed directly in ~/.config/rclone/rclone.conf. The config as described works for me with Murena’s Nextcloud.

thank you for your reply!

This got it working from my laptop!

I’m currently also debating whether I should buy a QNAP enclosure and put some WD Red Pro or IronWolf drives in a RAID 5 setup to sync Murena back to my server. But that’s a costly thing to do. The Murena data problem of last year did make me realize that it’s important to do this one way or another; thank you for bringing that up.

I’ll say it bluntly: Murena has not been very open about why they never tested backup and restore processes, or why they did not have any in place.

If you are thinking about a home server, look into TrueNAS and ZFS.
ZFS is a copy-on-write (COW) system with built-in checksumming and snapshots, developed to provide bulletproof long-term storage.
I can promote a 6TB snapshot backup as a restore in ~60 seconds, and by design snapshots are always consistent.
Well worth diving into, if you, like me, cherish your images!
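A rough sketch of the snapshot workflow (the pool and dataset names are placeholders):

```shell
# Take a consistent, point-in-time snapshot of the photos dataset.
zfs snapshot tank/photos@2024-06-01

# List snapshots of that dataset and the space they hold.
zfs list -t snapshot -r tank/photos

# Roll the dataset back if a bad sync mangles the files.
# Note: this discards all changes made after the snapshot.
zfs rollback tank/photos@2024-06-01
```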