Hi @voyager529,
You can, but please be aware that communication between nginx and libreY will be unencrypted (not really sure if this is a concern for a search engine, anyway).
That being said, it looks to me like an overly complicated setup, with the data going back and forth between your client, the Murena Cloud reverse proxy, and your search instance.
Of course it will work, and using Murena Cloud nginx will spare you from reverse proxy and SSL headaches. But you may also want to start with a simple HTTP setup @home, skipping the nginx reverse proxy for a start.
(note: this is a setup for having the search engine at your Murena Cloud server, it won’t be needed if you install libreY on another server @home).
Murena Cloud upgrades have never implied a complete overwrite of docker-compose.yml, so far (and I bet my pants that the upgrade for NC 29 will stay this way; hopefully we'll see that in the near future). Please see here as an example: https://gitlab.e.foundation/e/infra/ecloud-selfhosting/-/blob/master/upgrade-guides/upgrade-to-26.0.8.23.md.
Personally, I'd go for a direct edit after a backup copy (you can always backup/diff/copy/paste the lines to a newer file when upgrading).
Anyway, it will work the way you describe; just pay attention to the number of spaces at the beginning of lines, please.
Please read here: [HowTo] Add a webapp as a sideload of /e/ self-hosted cloud.
The differences for an external container @home should be (untested):
- keep the port exposure: in this case we need access to the Docker container from outside of Docker. You can use anything you want above 1024; let's say 8887
- there's no way you can proxy from one server to another without exposing the target port to the Internet (unless you have some kind of VPN between these servers): you will have to pinhole or NAT this port in your home firewall. Let's say we'll use 12345 as the external port, to keep everything clear here. Of course you can also restrict the source IP to your Murena Cloud server's public IP in that firewall rule
- now, you will need 2 DNS entries: one for the public entry point as HTTPS (let's say search.voyager529.com) and one for your private libreY instance as HTTP over 12345 (let's say search-engine.voyager529.com)
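As an illustration, the two entries could be plain A records in your DNS zone (the IP addresses below are documentation placeholders, use your real ones):

```
; public entry point, served over HTTPS by the Murena Cloud nginx
search.voyager529.com.         IN A 203.0.113.10   ; Murena Cloud server
; private libreY instance, reached over HTTP on NATed port 12345
search-engine.voyager529.com.  IN A 198.51.100.20  ; your home public IP
```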
- then, if you look at the `nginx` service in your Murena Cloud `docker-compose.yml`, you will find:

```yaml
    ports:
      - "80:8000"
      - "443:4430"
```

Knowing that, in the nginx config files we'll have to use 8000 for external HTTP/80, and 4430 for external HTTPS/443.
- in my example (link above), you can find 2 `server` directives:
– you should focus on `server_name` (the DNS name that nginx will know and reply for), and `proxy_pass` (files are not served locally, but by a proxied resource)
– the first directive is there to redirect HTTP to HTTPS; you only have to change `server_name` to your public DNS (search.voyager529.com)
– the second directive is for regular HTTPS data; here, change `server_name` to your public DNS (search.voyager529.com), and `proxy_pass` to your libreY instance as known from the Internet (http://search-engine.voyager529.com:12345) (please note the `http://`)
– in a "one in a box" setup Docker implements an internal IPAM and provides some kind of low-level DNS resolution to containers; that's why we usually use container names to communicate between them, for example in nginx conf files. In this case, we can't
– you can delete the `client_max_body_size` thing, it won't be of use for you
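To make this concrete, here is a rough sketch of what the two adapted `server` directives could look like for the @home case (hostnames and ports taken from the examples above; the SSL certificate lines are whatever your existing Murena Cloud vhosts already use, so treat this as a template, not a drop-in file):

```nginx
# HTTP (container port 8000 = external 80): redirect everything to HTTPS
server {
    listen 8000;
    server_name search.voyager529.com;
    return 301 https://$host$request_uri;
}

# HTTPS (container port 4430 = external 443): proxy to the home instance
server {
    listen 4430 ssl;
    server_name search.voyager529.com;
    # reuse the ssl_certificate / ssl_certificate_key lines
    # from your existing Murena Cloud vhost configs here

    location / {
        # note the http:// scheme and the NATed external port
        proxy_pass http://search-engine.voyager529.com:12345;
    }
}
```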
As for the option of having the search instance at your Murena Cloud server (please also read above for technical details):
- the (in)famous line 7 can be deleted: we won’t expose the service outside Docker
– ports are managed per Docker container: you can have multiple containers (even sharing the same source image) listening on the same port (the couple "container IP address + port" is the criterion here)
– `server_name` will still need to be your public DNS name (search.voyager529.com)
– `proxy_pass` should be set to your libreY container's "protocol+name+port" (http://librey:8080)
– I don't recommend using static IP addresses here: containers are managed by Docker! In Docker IPAM we trust
– you can also delete the `client_max_body_size` thing
– if your nginx complains about not finding the "librey" DNS name, you may have to play with the `depends_on` Docker Compose directive to have librey started before nginx (most likely: add "librey" to nginx's `depends_on` list)
– as far as I could experience, each container maintains its own image of the IPAM, acquired at startup, and the Docker IPAM is populated only when a container starts. There is probably an option to refresh all containers' IPAM view when starting a supplemental container, but I'm too lazy to find out…
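As a sketch, the `docker-compose.yml` side of this could look like the fragment below (the image name is an assumption, check the LibreY docs for the real one; only the relevant keys are shown):

```yaml
services:
  librey:
    image: librey/librey   # assumed image name, use the one from the LibreY docs
    # no "ports:" section here: the container is only reached through
    # the Docker-internal network, as http://librey:8080

  nginx:
    depends_on:
      - librey   # make sure librey is up before nginx tries to resolve its name
```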
- and 2. as for `docker-compose.yml`: you can also directly edit this file, it's very unlikely to be overwritten (in fact, most often we forget to clean it up)
- to 6.: it's OK
(extra step) Bravo!
Hopefully I didn’t forget something… Please ask if unsure!
And please remember: one of the major benefits of Docker is the easy cleanup, contrary to traditional hosted installs.