Self-hosting - my journey

Self-hosting - my journey - Part 1 of 3: Setting up a self-hosting system

I get the impression that more and more people, especially here in the /e/ community, are trying to reclaim their privacy. To me, privacy means having control over the place where I store my private data. It has been more than five years since I started to self-host my private data, and I believe that sharing my experiences, struggles and thoughts from this process with this community might be beneficial.

The process of self-hosting can take different paths with different levels of difficulty - from a “do it all yourself” approach to “buy a product that does the work”. This text reflects my experiences with the “do it yourself” approach, which also means that it will describe technical aspects in some places. However, I try to mention alternatives and shortcuts that I came into contact with.

The target audience is people who want to start self-hosting and would like some pointers on where to begin. A text like this cannot cover all the detailed work that is necessary to set up such a self-hosted system, but I believe that it can give a general guideline that helps with taking one step after the other.
Such a journey also entails setbacks, getting stuck and all kinds of demotivating situations, so be prepared to read manuals, dig through tech forums, get used to searching for error messages or even to creating bug reports for software you want to use. There may come times when it is healthy to let the system stand for a week and clear one's head.

This first part is about setup options and getting a basic system running.
The second part will cover software packages used for the required services.
The third and final part is about additional related topics and concludes this series.

Motivation

Looking back, my initial main motivation for setting up a self-hosting system was laziness.

It all started when I was about to purchase my first smartphone and thought about the process of transferring my contact data to the new phone. Entering all the data by hand seemed like a tedious task (something I had already done with my “pre-smartphone” mobiles) and I wanted a more elegant solution - one that I would not have to repeat for every new device.
Seeing how other family members struggled to transfer their contact data each time they purchased a new phone (contacts were stored on a mixture of SIM card, phone memory and Google accounts), I decided to implement a centralized solution for myself.

I discovered CardDAV, which seemed to be a reasonable standard that also looked widely adopted.
Looking at the sibling protocol CalDAV, it seemed like a good idea to also create a centralized calendar that I would be able to access from different devices.
A further incentive for self-hosting was to have a backup archive for my E-Mail conversations, source code and important documents, which at that time were backed up at most once or twice a year.

Setup

Self-hosting in the widest interpretation can take different shapes: for example a website at a web hosting provider, a managed or unmanaged rented server, or a server that is located at home and reachable through one's own internet provider.

Having a website is nice and easy to set up and maintain, but it did not cover several of my intended goals. For example, I was unable to find a reasonable way to archive my E-Mails in this scenario, so this solution was not suitable for me.

Renting a virtual server at a hosting provider was something that I considered and that seemed to meet my needs, but I decided against it since I did not want my private data lying around on the servers of some company.

So in the end my approach was to set up a server at home, connected to my router and accessible from the internet, allowing me to reach my contacts, calendar and other services from anywhere.

Hardware

Some companies offer prebuilt and preconfigured servers for exactly this purpose. In order to decide whether the functionality they offer suits one's needs, be prepared to invest time in reading their manuals. Since I wanted to be as flexible as possible, I decided against such a preconfigured server. However, for someone who is just getting started, this might be a reasonable choice.
If the only use case is to access files remotely, then a Network-attached storage (NAS) might be all one needs, and there are companies that specialize in and sell NAS servers; a minimal solution in that case would be to use a router that allows remote access to an attached external hard disk.

Back when I started, small single-board computers were just starting to gain traction, so they were not on my radar, but nowadays they seem like a reasonable alternative.

During my research for the hardware of my first server, I had to take several aspects into account.

  • Location: The server would be located at a place where it could be heard even during night, so my first premise was that it had to be absolutely noiseless.
  • Running cost: A server running 24/7 is nice in wintertime for additional heating, but it also consumes power around the clock, so I looked into hardware with minimal power consumption.
  • Size: Since I did not need big extensibility for this server, a very small chassis would be my ideal setup, so that I could place it near my router.

I ended up with a Mini ITX board, a passively cooled case without fans, 2GB RAM, a 64GB SSD and a 36W power supply for around 230€.

Two years later it got upgraded to 16GB RAM, 500GB SSD + 60W power supply.

At the moment I am on my third generation server. Unfortunately, at the time this upgrade was due, there were no suitable boards available with low power requirements, so this time I went with a passively cooled CPU with a 65W TDP and a 120W power supply, even though it is far more powerful than my needs as a server require (currently I am donating some CPU cycles to an open source project).

One risk that I always try to minimize is a hard disk failure. Setting up and restoring the data is a lot of work, so I replace the hard disk at least once every two years.

For setting up the system, keep in mind that a screen, a network cable and a keyboard come in very handy; during operation I usually have only the network cable attached, which also allows me to maintain the server remotely.

I gave some thought to Wi-Fi vs. cable, but since cable is much more reliable, I went with that.

Operating System

Operating systems come in many different variants.

  • For me Windows was out of the question, because my intention of using open source software would be much more difficult to realize on such a server.
  • Since I have never tried macOS, I lack knowledge about using it as the operating system for a self-hosting system.
  • The BSD family of operating systems contains good candidates for such a system from what I can tell, but I have come into contact with these only a few times.
  • Before I started the whole self-hosting process, I already had experience with Linux, so for me it was natural to install the Linux distribution of my choice on the newly acquired server.

Besides the usual aspects of choosing an operating system (familiarity, hardware support, security, stability, community & support, up-to-dateness), one additional aspect when choosing a distribution is the availability of software packages. Open source software comes in different shapes, listed here in decreasing order of installation difficulty:

  • source code:
    Installing software from the source code is usually the most difficult method, as it requires the installation of all prerequisites necessary to compile and run the software.
  • binary zipped format:
    Installing zipped binary packages still requires one to manually install the additional packages (dependencies) that are essential to run the software.
  • binary distribution package:
    Some open source software projects provide packages for certain OS distributions that contain metadata about which additional software is required in order to run that software. This makes installation simpler, as the dependencies can be installed automatically. While there are several package formats, I would like to single out deb and RPM because of their widespread usage, which means that software projects are more likely to provide packages in these formats than in others.
  • part of the package repository of the operating system:
    This is the easiest way to install software because it is available via the package manager of the operating system.

Choosing a distribution that has a large repository of packages will make it easier to set up the whole system, but this might come with drawbacks like outdated packages.
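To illustrate the difference in effort, here is a rough sketch with placeholder names (the exact commands depend on the distribution and on the software in question):

# Easiest case: the software is part of the distribution's package repository (Debian-style example)
sudo apt install somepackage

# Hardest case: building from source - all build dependencies have to be installed by hand first
tar -xzf somesoftware-1.0.tar.gz
cd somesoftware-1.0
./configure
make
sudo make install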

Usually all operating systems provide adequate information about their installation process, which will guide one through the complete installation.

At one point during the installation there is usually a question about which language should be used, and two options are likely to cross one's mind: one's own native language and English (if they are not the same). The advantage of using one's native language is that it is easier to understand; however, more documentation and tutorials are available in English, so it might be easier to find help when searching for English terms and error messages.

Domain names and DynDNS

An easy way to initiate communications to the server from the outside is by using a domain name.

Short excursion:
Computers talk with each other via IP addresses (like 51.15.106.51). Since they are difficult to remember, Domain Name System (DNS) was invented to translate human readable domain names like “community.e.foundation” to their computer readable counterpart “51.15.106.51”. Dynamic-DNS (DynDNS) extends the functionality of DNS servers and allows users to easily and quickly change the IP address of a domain name on the fly.

My dial-up connection resets once every 24 hours, which also changes the outside IP address of my server. So in order to be able to reach it consistently from the outside with my chosen domain name, I have to use the services of a DynDNS provider.

While my chosen DynDNS provider offers DynDNS free of charge for subdomains of a few selected domains, I went for the option of having my own domain name for this server.
Currently the domain name costs me about 10€ per year and the DynDNS service $30 per year.

Some routers also offer free DynDNS services; however, they mostly allow only predefined domain names.
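Most DynDNS providers offer a simple HTTP(S) update interface, and many routers can call it automatically. As a rough sketch - the URL and parameters below are purely hypothetical, the real ones are documented by the respective provider - an update triggered from the server itself could look like this:

# Hypothetical update call; consult the DynDNS provider's documentation for the real URL and parameters
curl -u myuser:mypassword "https://dyndns.example-provider.net/nic/update?hostname=example.com"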

Ports & Port forwarding

Servers usually have to talk to multiple different clients at the same time. In order to make this possible, ports were invented as communication endpoints. Each IP address has ports numbered from 1 to 65535 associated with it, through which the computer behind that IP address can be reached.

Some of these port numbers have a standard usage; for example, port 80 is the default port for accessing unencrypted websites: when the URL of a website starts with “http://”, the browser tries to access the server on port 80. A list of default ports is available on Wikipedia.

After setting up DynDNS for the dial-up connection (let’s assume that the domain name is example.com), try to reach the server at http://example.com with a web browser. Most likely there will be no connection, or a response like

Unable to connect to remote host: Connection refused

Let’s see what happens here:

  1. The web browser queries the IP address of the domain “example.com” and gets the according IP address A.B.C.D as response
  2. The web browser tries to get a connection on port 80 on the IP address A.B.C.D, because it is the default port for “http”

So the question is: what is the server that is reachable at the IP address A.B.C.D? It is usually the router that is plugged into the connector in one's wall, and most routers in their factory settings deny access on all ports from the outside.
At this point it is necessary to forward the request of the web browser on port 80 from the router to one's server. This can be configured in the administrative panel of the router and is called Port forwarding.
For port forwarding a few details are necessary:

  • The external port: in the example above: 80
  • Protocol: Most applications use TCP, so this should be selected and others (e.g. UDP) ignored.
  • IP address of the target: internal IP address of the new server (the administrative panel of the router usually contains a list of all connected devices and their respective IP addresses)
  • Port of the target: the default 80 is a reasonable choice

After setting up a web server on one's server, reloading the website in the web browser should result in the display of a welcome page. Web servers usually provide a simple welcome page in their default configuration when accessed on port 80.

Let’s see what happens in this case:

  1. The web browser queries the IP address of the domain “example.com” and gets the according IP address A.B.C.D as response
  2. The web browser tries to get a connection on port 80 on the IP address A.B.C.D, because it is the default port for “http”
  3. The router receives a connection on port 80 and forwards it to the server on port 80
  4. On port 80 of the server, the web server listens and answers with the content of the welcome page
  5. The answer is transferred to the router and the router sends it back to the web browser
  6. Finally the web browser displays the website and one can view it

With that, one's own server is accessible from the internet.
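A quick way to check the whole chain from the command line (assuming curl is installed) is to request the welcome page and look at the response headers:

# Should return something like "HTTP/1.1 200 OK" once DynDNS, port forwarding and the web server are in place
curl -I http://example.com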


Self-hosting - my journey - Part 2 of 3: Software for delivering services

While the first part covered setting up a server, this part is all about the software that provides the services that I want to utilize.
The third and final part will outline additional related topics and conclude this series.

Up until this point I tried to keep my personal bias out of the text, but in order to describe the software that I used, I will be unable to do so.

Primary Services

Now that I had the system up and running, it was time to setup my services.

Calendar & Contacts

It is of course possible to use plain text files to store contacts and calendar information.
However, in order to synchronize this information in a convenient way, the state of the art is to use the CalDAV and CardDAV protocols.

They work in form of a Client-server model and in order to use them two components are needed:

  • a server which hosts calendar & contact information
  • one or more clients that access the server

There are several open source CalDAV/CardDAV servers to choose from; two that I came into contact with, SOGo and Nextcloud, are described below.

CalDAV/CardDAV clients are available for nearly all operating systems. /e/ supports them out of the box.

Some software packages combine the server-component with a web client-component, allowing direct access to calendar & contacts via web browser. Examples for that are SOGo and Nextcloud.

When I started out, SOGo made a more appealing impression on me than the alternatives and even had a nice GUI for accessing calendar, contacts and Email, so I decided to utilize SOGo. In order to get it running, I additionally had to set up a database (for storing all data: contacts, calendar, …), an Email server with IMAP and SMTP (for mail access) and an LDAP server (for user accounts; using a database instead of LDAP would also be possible).

Recently I tried the Nextcloud Calendar and Contacts add-ons. While I prefer the SOGo contact GUI over the Nextcloud contact GUI, I enjoy having all information in one place in Nextcloud. Currently I do not have a clear favorite, and time will tell which server I will settle on.

As for clients I had no problem accessing calendar & contacts with Thunderbird + CardBook + Lightning or with /e/.
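To check from the command line whether a CalDAV/CardDAV server is reachable at all, a WebDAV PROPFIND request can be sent with curl. The URL below is only a placeholder - the actual path depends on the server software and is described in its documentation:

# A reachable DAV endpoint typically answers with "207 Multi-Status" and some XML describing the collection
curl -u myuser -X PROPFIND -H "Depth: 0" https://example.com/path/to/dav/collection/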

Email archive

I consider myself a beginner regarding Email servers, since I have spent only very little time researching this area, so please keep in mind that there might be solutions available that are better suited or easier to set up.

For Emails three components have to be accounted for:

  • Mail storage server (IMAP)
  • Mail sender server (SMTP)
  • Client to access the server

POP3 might be considered as an alternative for IMAP, but in my mind IMAP is more predictable when it comes to multiple clients.

I settled on Dovecot as my IMAP server because it promised to be flexible and administration-friendly, and indeed setting up my mail archive in a reasonable way was quite easy.

As SMTP server I went with Postfix and, as a security precaution, configured it so that only local mail is delivered and mail is never sent outward. To be honest, I do not remember why I chose Postfix, but since it works, I do not intend to change it.

The SMTP server is used to only send local maintenance information.
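As a rough sketch - based on the scenarios described in the Postfix documentation, so the exact parameters should be checked against the installed version - such a “local delivery only” configuration in main.cf could look like this:

# /etc/postfix/main.cf (excerpt)
# Listen only on the loopback interface - no connections from the outside
inet_interfaces = loopback-only
# Accept mail only for the local machine itself
mydestination = $myhostname, localhost.$mydomain, localhost
# Refuse to deliver any mail to the outside world
default_transport = error: outside mail is not deliverable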

Any Email client that speaks IMAP can access Dovecot. The access rights are configured in the LDAP server, but Dovecot and Postfix also allow other methods for verifying users.
Currently I access the Email archive via Thunderbird at home and via the SOGo web client. I tried the Nextcloud add-ons Mail and RainLoop, but Mail has difficulties displaying formatted mails and RainLoop has problems with the large number of folders (>100) in my mail archive, so neither of them is a viable option for me at this time.

Mail servers can be accessed unencrypted and encrypted. The encrypted IMAP variant is called IMAPS and its default port is 993. In order to set up the mail server with encryption, a Certificate Authority is necessary (see Part 3).
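Whether the encrypted variant is actually being offered can be tested with openssl's built-in client, which connects to the port and prints the certificate:

# A working IMAPS setup shows the certificate chain and ends with an IMAP greeting like "* OK ..."
openssl s_client -connect example.com:993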

File sharing

File sharing was not part of my initial setup, until sharing travel photos in a convenient way (rather than putting a zip file onto a web server) became more important to me.
I started with Owncloud which at that time was the leading open source solution.

Shortly after that, one of the cultural nonprofit organizations which I support became interested in creating a common file sharing solution for the whole team. So I set up an Owncloud instance for them, which was well received. At that time I also had a look at a few alternatives: SyncThing, Seafile and the Community Edition of Alfresco.
Two weeks after the rollout I got really frustrated, because the initiator of Owncloud announced that he would leave Owncloud and start anew with Nextcloud. As is often the case with projects that have mostly a single person as the driving force behind them, I had the feeling that Owncloud would stagnate and that Nextcloud would receive far more innovation, so I wanted to switch immediately to Nextcloud, which I did with my private file sharing instance. Fast forward a year, and I convinced the team of my nonprofit to make the switch to Nextcloud as well. This reduced my administrative overhead, since I no longer needed to take care of two different file sharing solutions, but could concentrate on Nextcloud only.

There is an add-on available for Nextcloud that allows using LDAP for managing access rights, so I set it up that way. Furthermore, Nextcloud requires a database for storing its general data, and it can also be set up to utilize the Email server.
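For reference, assuming a standard installation in which Nextcloud lives in /var/www/nextcloud and the web server runs as www-data, the LDAP add-on can be enabled from the command line with Nextcloud's occ tool (the actual LDAP connection is then configured in the admin web interface):

cd /var/www/nextcloud
sudo -u www-data php occ app:enable user_ldap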

Nextcloud has added multiple functionalities since then, like videotelephony or simultaneous online editing of an office document by multiple persons, to name just two.

Source code repository

Whenever I develop or write something, Git is the version control system of my choice.
And since hosting my private source code repositories publicly was not to my taste, I set up my own Git repository server.

In the beginning I used Gitolite. It does not provide a web interface, but it is hard to find a more secure Git repository server, because it relies exclusively on Secure Shell to handle access.

Later I switched to a Git repository with a web interface.
One of my candidates was GitLab, which you might know from https://gitlab.e.foundation/, but I discarded it since it requires a lot of memory.
Instead I installed Gogs, a lightweight alternative. Before using Gogs one should be aware that it is developed mostly by a single person, which means that if this person stops the development, the risk is very high that the project will stall and get abandoned.

Gogs requires access to a database for storing information about accounts and git-repositories. Nowadays it is also possible to use LDAP for authentication.

Supporting Services

As already mentioned, some of the services that are hosted on my server rely on functionality provided by other software packages. These are explained in this chapter.

Database

When talking about open source relational SQL databases, PostgreSQL and MariaDB (a fork of MySQL) are the two big players (with an honorable mention of SQLite for special use cases). Nearly all of the mentioned software packages that need a database are able to make use of either of them. So it is a matter of personal choice which one of them to use; even setting up both of them for different purposes is possible.
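Most of the mentioned applications simply need an empty database and a dedicated database user, which their setup routines then fill. A minimal sketch, assuming PostgreSQL and placeholder names:

# Open an SQL prompt as the postgres system user
sudo -u postgres psql
-- inside psql: create a dedicated user and a database owned by that user
CREATE USER myapp WITH PASSWORD 'choose-a-strong-password';
CREATE DATABASE myapp OWNER myapp;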

These database-servers also provide the functionality to access their data in encrypted form, for which a Certificate Authority is necessary (see Part 3).

For both databases there are administrative web applications available (phpPgAdmin & phpMyAdmin).

LDAP

Instead of trying to explain what Lightweight Directory Access Protocol (LDAP) is, I just want to refer to its specification, because it covers many different use cases.

As my LDAP server I installed OpenLDAP and use it to configure which accounts have access to which services and to store user passwords.

The big advantage of LDAP is that nearly all big software packages can use it as a standardized backend for authenticating users, so it provides a central place to store user accounts and passwords. This also means that I can access nearly all my services with the same password, and in order to change the password, I only need to change it once for all services.
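To give an idea of what such a directory looks like in practice, here is a query using OpenLDAP's command line tools; the base DN is a placeholder and depends entirely on how the directory was structured:

# List all entries below the (placeholder) base DN dc=example,dc=com
ldapsearch -x -H ldap://localhost -b "dc=example,dc=com" "(objectClass=*)"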

Setting up the LDAP server forced me to climb by far the steepest learning curve I have encountered in any software package so far. So if you enjoy steep learning curves, go for it! (Remember when I talked about letting the server stand for a week?)

Besides OpenLDAP there are also other LDAP servers available.

OpenLDAP also provides the option to access data in encrypted form, for which a Certificate Authority is necessary (see Part 3).

Currently I use phpLDAPadmin for administrative LDAP tasks, although it has not been maintained for more than six years. LDAP Admin, an LDAP client for Windows, also appealed to me. Finding a suitable replacement is not currently on my agenda, so I am unable to point to good alternatives.

Since phpLDAPadmin has not received any security updates for a long time, I do not expose this web application directly to the internet, which should mitigate most of the possible problems.

Web server

Most of the primary services on the server are web applications, e.g. Nextcloud or SOGo. In order to use them, a web server is required.
Currently two of the most used web servers are Apache and Nginx. For the most part they provide the same functionality, and it is up to one's personal preference which of them to use.

When accessing web pages you might have seen either “http://” or “https://” as the prefix of the domain name. The difference between the two is that http is unencrypted and https is encrypted. When setting up a website, always make sure to use the encrypted version. The standard port for http is 80 and for https it is 443.

In order to host an encrypted website a Certificate Authority is necessary (see Part 3).
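As an illustration, a minimal Apache virtual host for the encrypted variant could look roughly like this (the certificate paths are placeholders; where the files actually end up depends on how the certificates were obtained, see Part 3):

# Requires the SSL module (on Debian-based systems: a2enmod ssl)
<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
    SSLCertificateFile    /path/to/certificate.pem
    SSLCertificateKeyFile /path/to/private-key.pem
    DocumentRoot /var/www/html
</VirtualHost>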

Maintenance Access

Administrating a self-hosted server requires maintenance. This can have different reasons, be it that security updates for packages are available or that one of the required programs stopped working because a hard disk partition ran out of free space.

In such cases Secure Shell (SSH) is the weapon of choice for admins. This program allows shell access to the server from a remote system.

In order to use SSH in a secure way, one should always verify that the connection is established with the correct server. This is done by verifying the host key of the server.
When connecting for the first time to a remote server via SSH, one is prompted to verify the ECDSA server key fingerprint (besides ECDSA there are also others like RSA or ed25519):

user@myclient:~# ssh myserver
The authenticity of host 'myserver (10.0.0.2)' can't be established.
ECDSA key fingerprint is SHA256:meooUCMijdXs+vAHta7KgHqXZXUqMZf+nnwkIDVsOj4.
Are you sure you want to continue connecting (yes/no)?

So how can one verify that this is the correct server? Option 1: ask the admin of the server (and don’t accept “can’t tell you” as an answer!). Option 2: if you are the admin of the server, then you can find it out with the command

user@myserver:~$ ssh-keygen -l -f /etc/ssh/ssh_host_ecdsa_key.pub
256 SHA256:meooUCMijdXs+vAHta7KgHqXZXUqMZf+nnwkIDVsOj4 root@myserver (ECDSA)

The place where public keys are located ("/etc/ssh/ssh_host_ecdsa_key.pub") may differ from OS to OS. Consult the OS-documentation or the search engine of your choice in that case.
Some clients ask to verify the MD5 fingerprint instead of the currently preferred SHA-256 fingerprint.

In the example above, the fingerprints match, so it is safe to accept the key fingerprint.
If the key fingerprint is accepted, it will get cached and does not need to be verified again.

If they do not match, then this indicates a problem; in the worst case, someone is trying to attack the server and trick you into giving up your credentials, which would compromise the server. So you should always take the time to make sure that the server key fingerprints match.

Monitoring

Sometimes it is really nice to know when things do not work as they should. Monitoring software helps identify such issues so that they can be taken care of.
Initially I started with Nagios, but nowadays I use Icinga, because Nagios was no longer maintained for my operating system.

Icinga requires a database backend to store its data and can use LDAP for access control to its web interface. It uses the SMTP server for delivering notifications to my mailbox.

More than 95% of the notifications that I receive are about available software package upgrades, so I am not sure whether setting up monitoring was worth the effort, as there are likely simpler ways to receive these upgrade notifications. However, the other 5% probably make it worthwhile.
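One such simpler way - sketched here under the assumption that the apt package manager and a local mail command (e.g. from mailutils) are available - would be a daily cron job that mails the list of pending upgrades:

# /etc/cron.d/upgrade-report (sketch): mail the list of pending package upgrades every morning at 06:00
0 6 * * * root apt list --upgradable 2>/dev/null | mail -s "Pending upgrades" admin@example.com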

Besides the mentioned monitoring solutions, there are many more alternatives for different needs.


Self-hosting - my journey - Part 3 of 3: Related topics and conclusion

Part 1 and part 2 outlined setting up the system and introduced software packages.
In this third and final part a few related topics are covered.

Certificate Authority

For some of the software packages I mentioned that a Certificate Authority is needed in order to access their data over an encrypted connection.
What does this mean?

Let us take the web server - web browser combination as example, but the same is valid for other client-server applications like E-Mail (IMAP & SMTP).

Web browsers display a small icon (for example a green padlock) to indicate that the communication with the web server is secure. How do they know this?

Web browsers have a built in list of certificate authorities that they trust (the process of how a certificate authority becomes trusted is way beyond this essay). When they encounter an encrypted website like https://example.com, they verify that the encryption was done with the blessing of one of the trusted certificate authorities. In that case they acknowledge the encrypted connection as secure.
A web server can get the blessing of a certificate authority by proving that it is indeed responsible for the domain example.com. Also note that such blessings are always linked to domain names and never to IP addresses.
When web browsers encounter an encrypted website for which they cannot verify the blessing, they show a warning message to the user - you might encounter self-signed or so-called snake-oil certificates in this context.

How does the process of getting the blessing of a certificate authority work, and how does one find a certificate authority that is trusted by web browsers?

When I first started self-hosting, only commercial certificate authorities were available, which are quite costly. So I set up my own certificate authority, issued my own blessing for my domains and added my certificate authority to my web browsers and Email clients. This allowed me to access my website with my own web browsers without warning messages; however, accessing my website with other browsers always resulted in warning messages.

Fortunately, with Let’s Encrypt a free service has emerged that is widely trusted by web browsers and that offers a simple process for creating the necessary blessing.

The procedure for setting up encrypted websites consists of the following steps:

  1. Install a Let’s Encrypt client. certbot is the recommended tool and it is available for most operating systems.
  2. Follow the documentation of certbot in order to receive a key and a certificate for the domain names.
  3. Configure the web server to use these keys and certificates.
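As an illustration of these steps, assuming a Debian-like system and an Apache web server, the whole procedure can be as short as:

# Step 1: install certbot together with its Apache plugin
sudo apt install certbot python3-certbot-apache
# Steps 2+3: obtain a certificate for the domain and let certbot adjust the Apache configuration
sudo certbot --apache -d example.com
# The key and certificate end up under /etc/letsencrypt/live/example.com/ and recent certbot packages also set up automatic renewal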

For digging deeper into technical aspects of website-encryption, have a look at TLS.

Networking

Previously I mentioned that IP addresses have the form “51.15.106.51”. That is only half the truth; they come in two different variants:

  • IPv4: they look like “51.15.106.51” (four numbers between 0 and 255 separated by periods)
  • IPv6: they look like “fd12:3456:789a:1::ba73” or “::1” (groups of hexadecimal numbers separated by colons)

The development of IPv6 was initiated in 1992 to mitigate the problem that IPv4 does not contain enough addresses.

When setting up a home network, it is safe to use only IPv4 addresses at the moment, but the migration to IPv6 is an ongoing process.

So the question arises: how does a server get its IP address?
There are two ways:

  • manually set the IP address of the server
  • let an authority distribute IP addresses.

Usually a router is such an authority and the protocol that is used to assign IP addresses to devices is called DHCP. DHCP works (very simplified) in the following way:
A device asks over its network connection whether there is a DHCP server available to assign IP addresses. The DHCP server receives this message, selects a unique IP address for this device and sends a message back containing the selected IP address. Now the device knows which IP address it has been assigned and can start communicating using this IP address.

When setting up the IP addresses manually, you should be aware of Private networks, Subnetworks and Broadcasting. Routers, in their function as DHCP servers, are usually preconfigured so that they handle these aspects correctly in their default settings.
If you want to dig deeper into networking, also have a look at Routing, but it will not be necessary for a basic server setup.
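Coming back to the manual option: a minimal sketch of a statically configured IPv4 address, assuming a Debian-like system that still uses /etc/network/interfaces (the interface name and addresses are placeholders and must match the local private network):

# /etc/network/interfaces (excerpt)
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1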

Encapsulation

Encapsulation is a method that allows multiple programs to run - in theory - independently of each other. This means that if one program gets hacked, the encapsulation makes it more difficult for an attacker to have an impact on other programs. The term Virtualization is closely related to this topic.

One way to do this is to create a virtual computer on a computer. There are several software solutions that provide this functionality.

Another approach is to use containers to separate programs from each other.
You might encounter software packages which offer Docker images as an installation variant. These use the container approach - mostly to simplify the installation process.
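As an illustration of the container approach, assuming Docker is installed, a containerized web application can often be started with a single command (the image name and port mapping below are just an example):

# Download and start a Nextcloud container, reachable on port 8080 of the host
docker run -d --name nextcloud -p 8080:80 nextcloud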

The main reason for me to leave my first server behind was that its CPU did not support hardware virtualization.
On my second self-hosting server I used XEN, while on my current one I use KVM, because I wanted to try something new. Both can be considered mature and I did not notice any problems with either of them.

Security

When you have a server that is accessible from the internet, people will try to hack it.

If you make a server accessible from the internet, you and only you are responsible for taking security precautions.

I want to provide a few pointers that can be taken into consideration in order to improve the security of a server. I explicitly claim that this is NOT a complete list of possible security measures.

Reading software manuals: Some software packages have documentation that includes recommendations for setting up the software in a more secure way; as an example, see the Security guide of Nextcloud 14. Also, entering search strings like “software name + security advice” into the search engine of your choice will reveal interesting aspects to keep in mind.

Security Updates: Software packages receive security updates. Always try to install them as soon as possible.

Encryption: When deciding which communication on the server should be encrypted, I recommend encrypting all outside communication. For all internal communication (for example between a proxy server and a web server, or between an application and the corresponding database) it is advisable to assess the risks associated with unencrypted communication and act accordingly.

Choice of Ports: Many attackers try their attacks only on the default ports. Changing, for example, the default SSH port from 22 to a different port like 56789 will likely reduce the number of attacks on one's server. Open only the minimal number of ports and close all others, in order to minimize the attack surface of the server.

Intrusion prevention: Intrusion prevention software like fail2ban helps identify and block unauthorized remote access attempts.
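A minimal sketch of such a configuration for fail2ban; jail names and defaults vary between versions, so consult the local documentation:

# /etc/fail2ban/jail.local (excerpt): ban an IP address for an hour after five failed SSH logins
[sshd]
enabled  = true
maxretry = 5
bantime  = 3600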

SSH: Do not allow root to log in via SSH with a password; instead use SSH key authentication with a passphrase.
Always verify the host key fingerprints when connecting to new servers.
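A rough sketch of this setup; the key type and file locations are common defaults, not requirements:

# On the client: create a key pair protected by a passphrase and copy the public key to the server
ssh-keygen -t ed25519
ssh-copy-id user@example.com

# On the server in /etc/ssh/sshd_config (excerpt), followed by a reload of the SSH service:
PermitRootLogin prohibit-password
PasswordAuthentication no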

Firewall: With firewalls specific rules can be enforced regarding communication with the server.
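A minimal sketch using ufw, a frontend for the Linux firewall (other systems use different tools), that only lets SSH and HTTPS through:

sudo ufw default deny incoming   # block all incoming connections by default
sudo ufw allow 22/tcp            # ... except SSH
sudo ufw allow 443/tcp           # ... and HTTPS
sudo ufw enable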

Backup & Restore

Implement a process for backing up your server data (both your private data and the server configuration). If you don’t, you will cry!
Since I utilize self-made minimal backup procedures that I deem unfit for distribution or recommendation, I am currently not in a position to give advice on backup software.

Since a backup is of no use without restoration, at least one restore dry run should be conducted to verify that it works and to familiarize oneself with the procedure.

VPN

Local networks have the advantage that it is more difficult for attackers to infiltrate them, because doing so requires physical access. However, when the computers of a network are distributed over the whole world, attackers can interfere with the communication between these computers much more easily.
VPN is intended to cryptographically secure the communication between computers in a way that they appear to be in a local network, even though they can be physically distributed over the whole world.

Server Management

Having multiple virtual machines implies that some tasks have to be performed on all of them. In this case it is beneficial to have functionality for easily managing all virtual machines. parallel-ssh or more sophisticated setups like Puppet can help in that regard.
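For example, with parallel-ssh the same command can be run on a whole list of hosts (the host file and the command are placeholders):

# hosts.txt contains one hostname per line; -i prints the output of each host inline
parallel-ssh -i -h hosts.txt "sudo apt-get update && sudo apt-get -y upgrade"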

Availability

When hosting a server, there is always the possibility that something does not work and the server is not accessible. One should be able to cope with such a situation and not depend on the availability of the server for critical activities. Companies mitigate such risks by applying High availability measures.

Alternative approaches

My server is set up in a very individual way. Fortunately, some less labor-intensive alternatives have emerged since I started.

Just recently YunoHost was brought to my attention, which seems to be a promising self-hosting alternative for people who do not want to take on the task of administrating an operating system. I have not tested YunoHost, but it seems to provide a self-hosting solution on one's own hardware that tries to minimize the administrative overhead. It might be considered instead of installing and configuring the OS and software packages oneself.

/e/ also has a Docker image on their TODO-list, which should make the self-hosting process easier:

we will provide /e/ services packaged as Docker images so that users can install the service on their own servers for self-hosting their mail, online storage, calendar etc.

Solid Pods, recently announced by Sir Tim Berners-Lee, are something that I will keep an eye on.

Conclusion and final thoughts

Please let me know if you find any errors or inaccuracies; I intend to keep this text updated.

Now that I have established a server for my own data, I am not inclined to go back, even though its upkeep costs time and money.

Switching to one of the mentioned alternative approaches is tempting in order to reduce my maintenance work, but for the moment I do not intend to do that - perhaps I will set up an additional virtual server where I can install and try YunoHost.
But it is good to see that more initiatives are being created which aim at making self-hosting easier for people who do not want to get too deeply involved in technical details.

When I look at my current self-hosting solution, the two things that I would like to advance most are:

  • At the moment my system contains only a rudimentary backup solution for the most important parts. I want to replace it with a more elegant solution that covers everything.
  • Self-hosted videotelephony via Nextcloud is something that I have set up recently, and I intend to introduce it to my family and use it for meetings of my nonprofits.

Further ideas to extend my setup include utilizing HAProxy, single sign-on, Rocket.chat, Jenkins, Mastodon, a wiki, a mind-mapping program, …, but there is no incentive for me at the moment to try them out.

Why is https://en.wikipedia.org/wiki/Self-hosting only about compilers? It looks like we are witnessing the formation of a second meaning for this term.

To me, /e/ is an additional step in the right direction to regain control over my privacy.


This topic is so interesting. I want to create my own services (self-hosted mail, cloud and a website to begin with) with some Raspberry Pis (I don’t want something ready-made and expensive because I just want to learn) and a lot of time (it’s gonna take years before I can control, secure and understand everything and really know what I’m doing), but being able to be independent and self-sufficient is definitely worth it.

I’ve saved your post for much later, thank you :wink:


I have a Synology NAS which has a lot of features like CardDAV/CalDAV, cloud and VPN. I currently only have the CardDAV and CalDAV features active, but I’m thinking of implementing the cloud feature as well.

The CalDAV feature only works on my internal network. I haven’t succeeded in getting it to work while outside of my internal network. I have installed the DAVx app for the CalDAV feature.

Getting caldav to work on your internal network is a good first step.
Not being able to access it from the outside can have many different reasons.

Without knowing more about your setup and your exact problem, I suggest that you check whether your router has Port forwarding enabled for the required CalDAV port and verify that you access your internal network via the correct IP address, the one that is assigned by your internet provider.

Maybe the Port forwarding or the DynDNS sections of my post can help you identify why you can not access it from the outside.

I have not tested the DAVx app, because I use the built-in card-/caldav functionality of /e/.


I shall check the port forwarding bit, thank you for that. I did not know that eelo has built-in CalDAV? Is that built into the calendar? Because I could not find it anywhere.

You can access CalDAV/CardDAV data of any Nextcloud Server in the settings:
Settings -> Accounts -> Add Account -> WebDAV -> “Login with URL and user name”.
I have tested this functionality only with Nextcloud servers, but not with a Synology NAS, so I cannot tell if that will work.


Ah yes I found it. If it works I’m gonna drop the davx app. Thanx for the help.

Best thing ever Done… Best of luck :black_flag:


By the way, is a Raspberry Pi a good computer to use as a server? If so, could someone give me an Amazon link for a good model?
Thanks so much :+1:

If the server is just for you, yes; if you want ten people connected at the same time, maybe not.
Here is the latest Raspberry Pi; I bought three of them.
You just have to search for the other things you need in order to create your little server.

:joy::ok_hand:Thank you @Anonyme!!

In addition to what Anonyme said, that model has only 1GB of RAM.
Keep in mind that each service (Nextcloud, mail server, …) that you want to run on your server needs RAM, and hosting too many services will result in noticeable slowdowns or instabilities. But for getting started with a small system it looks like a viable choice.

Yes, in order to run multiple services (and make it a little scalable), a cluster can be built with Docker and a few Raspberry Pis. (My knowledge stops here.)

Ok thank you! is there a Pi with more RAM?

Wikipedia has a comparison page (including device specifications) of single-board computers: https://en.wikipedia.org/wiki/Comparison_of_single-board_computers
You should be able to find an answer to your question there.
Otherwise have a look at the Raspberry-Pi website.

So @Markus, what operating system did you go with in the end?


I went with https://www.debian.org/ but this is a matter of choice. Use one you feel confident with and which satisfies your security requirements.

Ok. So it doesn’t matter which version as long as it’s Debian-based? That is, you could still install the server software?