The problem

If you use OVH's VPS SSD services, you will probably have noticed that the network is configured via DHCP, and IPv6 addressing is not set up automatically, so your server is only accessible through IPv4 by default.
OVH also runs a peculiar networking scheme: the assigned IPs are a /32 for v4 and a /128 for v6, so the gateway is not on the same subnet in either case.
This means we need to give the networking stack a little help so that it can reach the gateway (see the explanation below).
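To see the problem in action, here is a minimal sketch of what happens if you try to configure things by hand with ip(8), using the example addresses from later in this article (yours will differ):

# The interface only carries the single /32 address assigned by OVH
ip address add 51.38.112.253/32 dev ens3
ip link set ens3 up

# This fails: no existing route covers 51.38.112.1,
# so the kernel refuses to install a next hop it cannot reach
ip route add default via 51.38.112.1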

The solution

On Ubuntu 18.04, the preferred network configuration tool changed from ifupdown's /etc/network/interfaces to the new Netplan configuration renderer, which is meant to abstract your configuration and translate it into the syntax of whichever backend you choose, be it NetworkManager or systemd-networkd (the only two supported backends as of today).
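If you want to see that translation at work, here is a quick sketch (assuming the default systemd-networkd backend used on server images):

# Render the YAML into backend configuration without applying it
sudo netplan generate

# With the networkd backend, the generated units land here
ls /run/systemd/network/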

Here is an example /etc/netplan/01-netcfg.yaml from the very VPS this blog is hosted on:

network:
  version: 2
  ethernets:
    ens3:
      dhcp4: no
      dhcp6: no
      addresses:
        - 51.38.112.253/32
        - 2001:41d0:701:1100::b4f/128
      routes:
        - to: 51.38.112.1/32
          scope: link
        - to: 2001:41d0:701:1100::1/128
          scope: link
      gateway4: 51.38.112.1
      gateway6: 2001:41d0:701:1100::1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
      match:
        macaddress: fa:xx:xx:xx:xx:d9
      set-name: ens3
/etc/netplan/01-netcfg.yaml

As you can see, we are configuring our main network interface ens3: we disable DHCP and set the IPv4 and IPv6 addresses exactly as shown in the OVH server manager, with their /32 and /128 masks. Now comes the important part: we add static routes to the gateway addresses, with link scope! (see the explanation below)
Also important is setting the nameservers, here the usual Google Public DNS servers, and matching the interface by MAC address: interface naming is not guaranteed to stay the same if the underlying hardware changes, so ens3 may not always be called that way (not really an issue for virtualized cloud VPSs, but it doesn't hurt).
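Once the file is in place, applying and verifying it is quick; a sketch (netplan try, where available, reverts the change if you lose connectivity and do not confirm):

sudo netplan try      # apply with automatic rollback if connectivity breaks
sudo netplan apply    # apply for good

# Both tables should show the host route to the gateway with link scope,
# followed by the default route via that same gateway
ip -4 route show dev ens3
ip -6 route show dev ens3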

The explanation

One of the fundamental protocols underpinning IPv4 networking is the Address Resolution Protocol (ARP), which operates at the link layer (OSI layer 2). Its purpose is to let a host ask the other hosts on the same physical network (link) for their MAC address, which is needed together with the IP address for a packet to actually be delivered on that link. (IPv6 replaces ARP with the Neighbor Discovery Protocol, NDP, but the reasoning below applies all the same.)
It is also assumed that any host belonging to the same logical network (the one defined by OSI layer-3 IP subnets) is connected to the same physical link.
ARP therefore only makes sense within the same physical and logical network, where any host can directly reach any other logically related host with a simple layer-2 broadcast packet (using the broadcast MAC address FF:FF:FF:FF:FF:FF, the link-layer equivalent of IPv4's 255.255.255.255).
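You can actually watch this resolution happen on the VPS; a small sketch:

# Force the gateway to be resolved, then inspect the neighbour cache
# (ip neigh covers both ARP for IPv4 and NDP for IPv6)
ping -c1 51.38.112.1
ip neigh show dev ens3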

The outside world is obviously not connected to the same physical link or logical network as our VPS: the other host may well be thousands of kilometers away. So any time our VPS needs to send a packet to a host out there, it either needs to already know how to reach that specific host, or it needs to find some other host that can forward the packet on its behalf. In the latter case, that other host is called the (default) gateway.
Considering what we said earlier, it is only sensible for this default gateway to be on the same logical network as our VPS, otherwise reaching the gateway itself becomes a chicken-and-egg problem.
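You can also ask the kernel directly which gateway it would pick for an outside destination; a quick sketch using one of the nameservers configured above:

# Prints the chosen next hop (our default gateway), output device and source address
ip route get 8.8.8.8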

In OVH's case, the assigned IPs are single-host addresses that do not belong to any larger subnet, so the networking stack has nowhere to send an ARP query: as far as it is concerned, there is no host on its logical network other than itself.
The solution is thus to tell it that the gateway is physically connected to the same link (remember the scope: link above?), so that the stack considers it legitimate to send an ARP query for this off-subnet IP, a query which gets a valid reply thanks to OVH's particular configuration.
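In ip(8) terms, what the Netplan configuration above achieves is roughly the following (a sketch with the same example addresses; Netplan and its backend do this for you at boot):

# A host route that declares the gateway directly reachable on the link,
# even though it falls outside our /32
ip route add 51.38.112.1/32 dev ens3

# Now that the next hop is reachable, the kernel accepts the default route
ip route add default via 51.38.112.1 dev ens3

# Same idea for IPv6, where NDP plays the role of ARP
ip -6 route add 2001:41d0:701:1100::1/128 dev ens3
ip -6 route add default via 2001:41d0:701:1100::1 dev ens3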

In the end, nothing stops you from using a wider prefix such as /24, tricking the networking stack into thinking that it actually belongs to the same subnet as the gateway; but then your VPS would be sending unwanted broadcast packets into a network that does not really exist. In practice, OVH's routing policies will most likely filter and drop those useless packets, or they will simply die with no one replying to them.
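For completeness, that trick would look roughly like this (a sketch; not recommended, for the reasons just given):

# Pretend the interface sits on a /24, so the gateway appears on-link
ip address add 51.38.112.253/24 dev ens3
ip route add default via 51.38.112.1 dev ens3

# The kernel now also believes 51.38.112.0/24 is a directly attached network
# and will happily ARP and broadcast into it, even though it does not really exist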

If you read all the way through here, thank you. I really hope you found this article useful.