[HOWTO] Self-hosting feature thought: integration of LibreX into the /e/Cloud setup

So, no hard feelings toward the Murena Search frontend…but I’ve had some challenges with it as of late. The “Fail Whale” (the satellite error page showing all the different search engines denying the request) has come up more often, and the results sometimes aren’t great (e.g. Bing or DDG return relevant results for the same query when Murena Search doesn’t).

My thought, naturally, was that since I’ve got a self-hosted /e/Cloud server, perhaps it’d be possible to add LibreX as a way to do searching from my instance instead of Murena’s shared-by-everyone instance.

@smu44 , is it worth trying to integrate this into my /e/Cloud instance, or should I make that a separate Docker container in my environment and just add a Reverse Proxy config and a cert? Also, is there known documentation for making a custom website the integrated search engine within the /e/OS browser?

Thanks!


Apparently,

LibreX is now deprecated, if you come from the hnhx/LibreX repository and want to try LibreX, you should use LibreY


I’d definitely go for Docker! There is an example file provided at https://github.com/Ahwxorg/LibreY/blob/main/docker-compose.yml.

My 2 cents:

  • line #5 is to be deleted (your default network should be in bridge mode already)
  • lines #6 and #7 are to be deleted: as you will use the nginx reverse proxy you won’t need this port to be exposed outside
  • line #26 should be re-enabled if there are any files in /var/log/nginx inside the container (docker-compose exec librey ls -al /var/log/nginx)
  • lines #26 and #27: local path has to be changed to something like /mnt/repo-base/volumes/... (for example, /mnt/repo-base/volumes/librey/php_logs). Don’t forget to create these dirs, and change permissions & owner if needed
  • lines #29 to the end: Watchtower is a good idea, but we don’t want it to apply to our Murena Cloud containers. From https://containrrr.dev/watchtower/container-selection/ I’d add:
    labels:
      - "com.centurylinklabs.watchtower.enable=true"

to the “librey” Docker Compose service

    environment:
      - WATCHTOWER_LABEL_ENABLE=true

to the "watchover’ Docker Compose service


Adding a search engine to /e/OS Browser is quite simple: use your search engine at least 10 times, then you should be able to add it. Please read here: Add Startpage as search engine.

I’d definitely go for Docker!

Agreed! My question, I guess, could be better explained…

In my environment, I’ve got two options:
1.) add the LibreY container to the /e/Cloud environment directly, or
2.) add the LibreY container to a different, independent Docker host, in which case my only changes to the /e/Cloud environment would be adding the nginx config entry and the certificate.

Now, while I think option #2 is easier (honestly, it’s pretty generic, since I’ve already got a second VM with a Docker instance for KitchenOwl), I figured it’d be worth exploring what it would take to integrate LibreY into the build because, as we’ve already established, I’m the one weirdo who puts my data on my own server rather than a VPS :wink:. If I can turn this thread into a tutorial for other users, I think there is value to be had in that.

So, if I understand the procedure correctly, it would be as follows (stay with me):

  1. ssh into my /e/Cloud VM and docker-compose stop && nano /mnt/repo-base/librey.txt
  2. enter the contents of the linked docker-compose file, along with the recommended changes from your list (I’ll probably opt out of watchtower for the time being).
  3. cat /mnt/repo-base/librey.txt >> /mnt/repo-base/docker-compose.yml && docker-compose up -d

Do I have that right? My logic for keeping the librey entry in a separate text file and then concatenating it is that when the /e/Cloud update comes (no, I’m not looking forward to Nextcloud 29, why do you ask? =) ), I’m assuming the upgrade will overwrite my docker-compose.yml file, so the known-working config can be re-added after an upgrade by running the append command again. That’s my logic, anyway…I promise I won’t be offended if you tell me I’m way off base here and provide some corrections =).
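As a sanity check on my own idea (not something from the /e/Cloud docs), I figure docker-compose config would catch any broken indentation in the merged file before anything actually restarts:

    cd /mnt/repo-base
    docker-compose config > /dev/null && echo "YAML OK"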

Now, there are two related parts here I’m not completely clear on. First, I’d assume that I’d need to add a subdomain, say ‘search.voyager529.com’, that would point to the librey instance. I’d assume that I’d need to make a search.voyager529.com.conf file and put it in /mnt/repo-base/config/nginx/sites-enabled, as I’ve done with my Vaultwarden instance. That’s something I’ve done before, but you said that I didn’t need to expose the port since I’m using the internal nginx instance.

Where I’m a bit confused is that I’d assume there’s a need to tell nginx to direct traffic to the separate container. The rspamd.conf file uses a proxy pass and port 11334, and autodiscover gets the port 80 traffic with its proxy_pass entry, so my initial reaction was to make line 7 read - 8887:8080 to avoid the possibility of a conflict (8080 seems to be a pretty commonly used port for Docker containers), then make the proxy_pass line either http://librey:8887 (if the service name on line 2 can be used in this way) or http://192.168.1.2:8887 (if it can’t). Your statement about removing those lines leads me to believe that my understanding is incorrect; could I impose upon you to help me understand how to get nginx to point to the container without exposing the ports?
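In other words, my guess was one of these two inside the location block (pure speculation on my part, with 8887 and the 192.168.1.2 address being my own placeholders):

    # if the compose service name resolves inside the nginx container:
    proxy_pass http://librey:8887;
    # otherwise, fall back to the Docker host's LAN IP:
    proxy_pass http://192.168.1.2:8887;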

Second, something I haven’t done before is to use certbot to get a Let’s Encrypt cert, and this seems like as good a time as any to attempt that. Having read through the installer scripts, here’s what I think the procedures are; please tell me what I’ve got wrong here (again, same logic - separate files for custom entries to allow simple concatenation commands after updates):

  1. echo 'search.voyager529.com' >> /mnt/repo-base/config/letsencrypt/autorenew/custom_domains.txt
  2. cat /mnt/repo-base/config/letsencrypt/autorenew/custom_domains.txt >> /mnt/repo-base/config/letsencrypt/autorenew/ssl-domains.dat
  3. in the nginx config file I make, set the ssl_certificate line to /certs/live/search.voyager529.com/fullchain.pem and a similar path to the .key file.
  4. stop the nginx container.
  5. run /mnt/repo-base/scripts/ssl-renew.sh
  6. check the /certs/live folder to confirm the cert and key were created (a quick external check is sketched below). Assuming they were, start the nginx container again.
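For that last step, once nginx is back up, I figure the cert can also be verified from the outside with openssl (my own addition, not part of the installer scripts):

    echo | openssl s_client -connect search.voyager529.com:443 -servername search.voyager529.com 2>/dev/null | openssl x509 -noout -subject -dates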

So, I’m hoping that, with a few corrections, I can give this a shot =).

Thank you, Sylvain! And thank you @tcecyk for letting me know about the fork!

Hi @voyager529,

You can, but please be aware that communication between nginx and libreY will be unencrypted (not really sure if this is a concern for a search engine, anyway).

That being said, it looks to me like an overly complicated setup, having the data going back and forth between your client, the Murena Cloud reverse proxy, and your search instance.
Of course it will work, and using Murena Cloud nginx will spare you from reverse proxy and SSL headaches. But you may also want to start with a simple HTTP setup @home, skipping the nginx reverse proxy for a start.

(note: this is a setup for having the search engine at your Murena Cloud server, it won’t be needed if you install libreY on another server @home).

So far, Murena Cloud upgrades have never involved a complete overwrite of docker-compose.yml (and I bet my pants that the upgrade for NC 29 will stay this way; hopefully we’ll see that in the near future :clown_face:). Please see here as an example: https://gitlab.e.foundation/e/infra/ecloud-selfhosting/-/blob/master/upgrade-guides/upgrade-to-26.0.8.23.md.
Personally, I’d go for a direct edit after a backup copy (you can always backup/diff/copy/paste the lines to a newer file when upgrading).
Anyway, it will work the way you describe; just pay attention to the number of spaces at the beginning of lines, please.

Please read here: [HOWTO] Add a webapp as a sideload of /e/ self-hosted cloud.

The differences for an external container @home should be (untested):

  • keep the port exposure: in this case we need access to the Docker container from outside of Docker. You can use anything you want, above 1024. Let’s say 8887
    – there’s no way you can proxy from one server to another, without exposing the target port to the Internet (unless you have some kind of VPN between these servers)
  • you will have to pinhole or NAT this port in your home firewall; let’s say we’ll use 12345 as the external port to keep everything clear here :wink: Of course you can also restrict the source IP to your Murena Cloud server’s
  • now, you will need 2 DNS entries: one for the public entry point as HTTPS (let’s say search.voyager529.com) and one for your private libreY instance as HTTP over 12345 (let’s say search-engine.voyager529.com)
  • then if you look at the nginx service in your Murena Cloud docker-compose.yml, you will find:
    ports:
      - "80:8000"
      - "443:4430"

Knowing that, in nginx config files we’ll have to use 8000 for external HTTP/80, and 4430 for external HTTPS/443.

  • in my example (link above), you can find 2 server directives (a full sketch follows this list):
    – you should focus on server_name (the DNS name that nginx will know and reply for), and proxy_pass (files are not served locally, but by a proxied resource)
    – the first directive redirects HTTP to HTTPS; you only have to change server_name to your public DNS (search.voyager529.com)
    – the second directive is for regular HTTPS data; here, change server_name to your public DNS (search.voyager529.com), and proxy_pass to your libreY instance as known from the Internet (http://search-engine.voyager529.com:12345) (please note the http://)
    – in a “one in a box” setup, Docker implements an internal IPAM and provides some kind of low-level DNS resolution to containers; that’s why we usually use container names to communicate between them, for example in nginx conf files. In this case, we can’t
  • you can delete the client_max_body_size thing, it won’t be of use for you
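Putting those pieces together, the site config would look something like this (an untested sketch; names and port taken from the examples above, cert paths assuming the certbot procedure you described):

    server {
        listen 8000;
        server_name search.voyager529.com;
        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 4430 ssl http2;
        server_name search.voyager529.com;

        ssl_certificate /certs/live/search.voyager529.com/fullchain.pem;
        ssl_certificate_key /certs/live/search.voyager529.com/privkey.pem;

        location / {
            # proxy to the libreY instance on the other server, via its public name and NATed port
            proxy_pass http://search-engine.voyager529.com:12345;
        }
    }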

As for the option with having the search instance at your Murena Cloud server (please also read above for technical details):

  • the (in)famous line 7 can be deleted: we won’t expose the service outside Docker
    – ports are managed for each Docker container; you can have multiple containers (even sharing the same source image) listening on the same port (the pair “container IP address + port” is the criterion here)
  • server_name will still need to be your public DNS name (search.voyager529.com)
  • proxy_pass should be set up to your libreY container “protocol+name+port” (http://librey:8080)
    – I don’t recommend using static IP addresses here: containers are managed by Docker! In Docker IPAM we trust :laughing:
  • you can also delete the client_max_body_size thing
  • if your nginx complains about not finding the “librey” DNS name, you may have to play with the “depends_on” Docker Compose directive to have librey started before nginx (most likely: add “librey” to nginx’s “depends_on” list, as sketched below)
    – as far as I could experience, each container maintains its own image of the IPAM, acquired at startup, and Docker IPAM is populated only when a container starts. There is probably an option to refresh all containers’ IPAM image when starting a supplemental container, but I’m too lazy to find out…
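For example, a minimal sketch of that depends_on change (your nginx service may already list other dependencies; just append to them):

    nginx:
      depends_on:
        - librey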
  Steps 1 and 2: as with docker-compose.yml, you can also directly edit this file; it’s very unlikely to be overwritten (in fact, most often we forget to clean it up) :wink:
  Steps 3 to 6: it’s OK :smiley_cat:

(extra step) Bravo !

Hopefully I didn’t forget something… Please ask if unsure!
And please remember: one of the major benefits of Docker is the easy cleanup, contrary to traditional hosted installs :smiley_cat:

Huzzah, Success!!

The distilled set of steps for the next person (and, as a recommendation to the next person, go back and read what Sylvain wrote, it’s worth the read):

  1. Create an A record subdomain in your registrar; point it to the same WAN IP as the rest of the A records to your server.
  2. ssh into your server, and get the cert in place:
    1. nano /mnt/repo-base/config/letsencrypt/autorenew/ssl-domains.dat
    2. add your domain to the bottom of the list.
    3. docker stop nginx
    4. cd /mnt/repo-base/scripts && ./ssl-renew.sh
    5. docker start nginx
  3. Add LibreY to the docker compose file:
    1. nano /mnt/repo-base/docker-compose.yml
    2. paste the librey service definition from below (nested under the existing services: key); there are no passwords or example configs, so you can use it as-is unless you want to change the environment variables.
    3. Create the folder for the PHP logs: mkdir -p /mnt/repo-base/volumes/librey/php_logs
    4. set permissions for the php_logs folder. I was super lazy and made it 777, but I’m assuming that more restrictive permissions will work.
  4. Add the nginx config:
    1. nano /mnt/repo-base/config/nginx/sites-enabled/search.voyager529.com.conf
    2. paste the nginx config file from below.
    3. do a find/replace for REPLACE_THIS and replace it with the subdomain you made in step 1 (you can also use the sed command for this; an example follows these steps. Again, I’m lazy).
  5. Implement the config:
    1. docker-compose down
    2. docker-compose up -d
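For the sed route, something like this does the whole find/replace from step 4.3 in one shot (swap in your own subdomain and filename):

    sed -i 's/REPLACE_THIS/search.voyager529.com/g' /mnt/repo-base/config/nginx/sites-enabled/search.voyager529.com.conf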
docker-compose file

  librey:
    image: ghcr.io/ahwxorg/librey:latest
    container_name: librey
    environment:
      - CONFIG_GOOGLE_DOMAIN=com
      - CONFIG_LANGUAGE=en
      - CONFIG_NUMBER_OF_RESULTS=10
      - CONFIG_INVIDIOUS_INSTANCE=https://yt.ahwx.org
      - CONFIG_DISABLE_BITTORRENT_SEARCH=false
      - CONFIG_HIDDEN_SERVICE_SEARCH=false
      - CONFIG_INSTANCE_FALLBACK=true
      - CONFIG_RATE_LIMIT_COOLDOWN=25
      - CONFIG_CACHE_TIME=20
      - CONFIG_DISABLE_API=false
      - CONFIG_TEXT_SEARCH_ENGINE=auto
      - CURLOPT_PROXY_ENABLED=false
      - CURLOPT_PROXY=192.0.2.53:8388
      - CURLOPT_PROXYTYPE=CURLPROXY_HTTP
      - CURLOPT_USERAGENT=Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:116.0) Gecko/20100101 Firefox/116.0
      - CURLOPT_FOLLOWLOCATION=true
    volumes:
      # - /mnt/repo-base/volumes/librey/nginx_logs:/var/log/nginx # Disabled by default. These are the NGINX request logs.
      - /mnt/repo-base/volumes/librey/php_logs:/var/log/php # Enabled by default. These are the PHP error logs.
    restart: unless-stopped
nginx config file

server {
    listen 8000;
    server_name REPLACE_THIS;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 4430 ssl http2;
    server_name REPLACE_THIS;

    ssl_certificate /certs/live/REPLACE_THIS/fullchain.pem;
    ssl_certificate_key /certs/live/REPLACE_THIS/privkey.pem;

    include /etc/nginx/params/ssl_params;
    include /etc/nginx/params/headers_params;

    location / {
        proxy_pass http://librey:8080;
        include /etc/nginx/params/proxy_params;
    }
}
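Optional, my own quick test: confirm the proxy answers before pointing the browser at it.

    curl -sI https://search.voyager529.com | head -n 1   # expect an HTTP 200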

And…that’s it!!

Bonus: I only needed to do a single search for the /e/OS browser to let me set it as a default.

As always, thanks so much to @smu44 for all of his guidance.

One more thing, more of a ‘public-private message’ to Sylvain: my particular environment uses a set of VMs on a single subnet. My KitchenOwl instance is on a separate VM from my /e/Cloud instance, but they’re on the same LAN. I’ve done similar work in AWS (old habits die hard, I guess), hence the original thought of adding LibreY to the second VM.

While certainly debatable, there are a few reasons for this topology. First, I’m not big on having several applications share a single database. Since database instances are very small, they’re easy to separate out, largely so that a failure of one database leaves other applications functioning. The second reason for this is to limit resource usage; I’m happy to give a huge amount of disk space to my /e/Cloud instance so it can keep a complete backup of my photos, but I can give my other containers, like KitchenOwl, a very small amount of storage space and put it on a smaller SSD volume instead of the 4TB HDD volume that /e/Cloud enjoys. I understand this makes less sense on a VPS, but, well, I don’t run a VPS :stuck_out_tongue: .

Thank you again!


Thank you @voyager529 for this detailed guide! Congratulations, you did it! :smiley_cat:
It’s worth adding “[HOW-TO]” at the beginning of the subject title, by editing your first post. If you can’t, @Manoj would be able to do it for us, for sure (thanks!).

One little thing I may change:

I’d prefer to use docker-compose stop: with down, all containers will be dropped, then re-created with up. Using stop then up will only (re)create what’s necessary (new services, updated images, etc.).
One noticeable benefit is logs being kept, unless the container is re-created.
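In other words, the gentler cycle looks like this:

    docker-compose stop
    docker-compose up -d   # only (re)creates what actually changed
    # versus: docker-compose down && docker-compose up -d, which drops and re-creates everything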

Sorry, I missed your particular set-up when writing my previous message :confused:
I think that my proposal for “external container” will still apply, skipping the firewall+NAT part and replacing search-engine.voyager529.com with the target container (preferably with an internal DNS name but can also, of course, work with IP address) and using the container’s exposed port (8887).

Your set-up with multiple VM & databases totally makes sense to me, I may come up with a similar solution.
However, gaining experience while aging, I now prefer to reduce the number of VMs to ease maintenance, backups, etc., as service providers do with mutualized offers. Docker is of great help here.
But, when it comes to personal set-up, it’s only a matter of personal choice! :smiley_cat:
Please note that you can run multiple database containers in the same Docker host, using the same image or not. You just have to provide a different service name and, of course, different volumes & variables. Using the same image will optimize your storage :wink:
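For instance, a sketch of two MariaDB services sharing one image (service names, paths, and passwords are placeholders):

    services:
      db-app1:
        image: mariadb:10.11
        environment:
          - MYSQL_ROOT_PASSWORD=changeme1
        volumes:
          - ./volumes/db-app1:/var/lib/mysql
      db-app2:
        image: mariadb:10.11   # same image layers are stored only once
        environment:
          - MYSQL_ROOT_PASSWORD=changeme2
        volumes:
          - ./volumes/db-app2:/var/lib/mysql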

Your set-up with multiple VM & databases totally makes sense to me, I may come up with a similar solution.
However, gaining experience while aging, I now prefer to reduce the number of VMs to ease maintenance, backups, etc., as service providers do with mutualized offers. Docker is of great help here.
But, when it comes to personal set-up, it’s only a matter of personal choice! :smiley_cat:

I think there’s a bit of a pendulum swing that happens over time when handling environments, swinging between ‘centralization’ and ‘decentralization’, since the pros and cons of each are almost opposite.
While not a good idea for an /e/Cloud instance, what I’d probably end up doing in my homelab would be to make a single VM that just hosted a MariaDB instance, and then have all the Docker containers and KVM containers (this guy is the ‘you’ of Proxmox containers :slight_smile: ) point to the database VM. Don’t get me wrong, I’ve run mariaDB as a Docker container, and it’s worked just fine, but I’ve always been terrified whenever I’ve updated the container, because it would bring down half my applications if I did. Moreover, I couldn’t use VMWare/Proxmox snapshots to help mitigate issues, because reverting would mess up the database entries of the other applications if I reverted the snapshot, or do other weird things (e.g. photos not showing up in nextcloud because the files are in the NFS share where nextcloud data goes, but the database entries referencing them are gone due to the revert).

Again, 101 ways to do all of this, and I’m sure I’ll find another add-on for my /e/Cloud server that’ll warrant a how-to article, where I learn more in the process =).

Thank you again, Sylvain!

I totally agree and share your concerns when it comes to MariaDB data safety.

That’s why I included a MariaDB SQL backup solution with the self-hosted Murena NC23 release, based on [HOWTO] Properly backup self-hosted /e/ cloud databases.
It’s not activated by default; owner’s choice!
Backups are simple dumps to make restores easy; there are plenty of guides around (for example, here or here).
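At its core, such a backup is just a dump; a minimal sketch (the container name, credentials, and database name are placeholders):

    docker exec mariadb sh -c 'exec mysqldump --single-transaction -uroot -p"$MYSQL_ROOT_PASSWORD" nextcloud' > nextcloud-$(date +%F).sql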

I made it modular, so it’s easy to add a supplemental “custom” database, just by duplicating one of the provided scripts and its systemd configuration files.
Also, it shouldn’t take much work to adapt for another MariaDB (not Murena Cloud) instance.
Please ask (privately if you prefer) if you need help for these :smiley_cat:

As an alternative, any snapshot of a DB engine should be taken with the DB engine off (i.e. stopping the Docker container). My choice for self-hosted Murena Cloud is to shut down the entire VM, to ensure consistency.

Sadly, I couldn’t find any easy solution to the downtime when upgrading the MariaDB engine :confused:
