
Metadata of the DGNum infrastructure


dns.nix

The DNS configuration of our infrastructure is completely defined with the metadata contained in this folder.

Each machine has records pointing to its IP addresses, when they exist:

  • $node.$ IN A $ipv4

  • $node.$ IN AAAA $ipv6

  • v4.$node.$ IN A $ipv4

  • v6.$node.$ IN AAAA $ipv6
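As an illustration, for a hypothetical node web01 with the addresses 192.0.2.10 and 2001:db8::10 in a zone example.org (all names and addresses here are made up), the generated records would look like:

```dns
web01.example.org.     IN A     192.0.2.10
web01.example.org.     IN AAAA  2001:db8::10
v4.web01.example.org.  IN A     192.0.2.10
v6.web01.example.org.  IN AAAA  2001:db8::10
```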

Then the services hosted on those machines can be accessed through redirections:

  • $ IN CNAME $node.$

or, when targeting only a specific IP protocol:

  • $ IN CNAME v4.$node.$
  • $ IN CNAME v6.$node.$
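Continuing the made-up example above, a service reachable over both protocols, and another restricted to IPv4, would give records along these lines:

```dns
cloud.example.org.  IN CNAME  web01.example.org.
mail.example.org.   IN CNAME  v4.web01.example.org.
```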

Extra records exist for the name servers (ns), the mail configuration, and the main website, but they should not be changed or tinkered with.


network.nix

The network configuration (except the NetBird VPN) is defined statically.
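A static definition for a single node could be sketched as follows (the node name, interface name, addresses, and attribute names are all made up for illustration; the actual schema may differ):

```nix
{
  compute01 = {
    interfaces.eth0 = {
      ipv4 = [ "192.0.2.10/24" ];
      ipv6 = [ "2001:db8::10/64" ];
    };
    # The NetBird VPN addresses are not part of this static definition
  };
}
```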



nixpkgs.nix

Machines can use different versions of NixOS; the supported and default ones are specified here.
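The version information might be sketched as an attribute set like this (the attribute names and version numbers are assumptions, for illustration only):

```nix
{
  supported = [ "23.11" "24.05" "unstable" ];
  default = "24.05";
}
```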


nodes.nix

The nodes are declared statically; several options can be configured:

  • deployment, the Colmena deployment options
  • stateVersion, the state version of the node
  • nixpkgs, the version of NixOS to use
  • admins, the list of administrators specific to this node; they will be given root access
  • adminGroups, a list of groups whose members will be added to admins
  • site, the physical location of the node
  • vm-cluster, the VM cluster hosting the node, when appropriate
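As a sketch, a node declaration using these options might look like the following (the node, member, group, site, and cluster names are made up, and the exact schema may differ):

```nix
{
  compute01 = {
    site = "rke01";               # made-up site name
    stateVersion = "23.11";
    nixpkgs = "24.05";
    admins = [ "jdoe" ];          # made-up member
    adminGroups = [ "fai" ];
    vm-cluster = "cluster-01";    # only when hosted on a VM cluster
    deployment = { };             # Colmena deployment options
  };
}
```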

Some options are set automatically, for example:

  • deployment.targetHost will be inferred from the network configuration
  • deployment.tags will contain infra-$site, so that a full site can be redeployed at once
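For instance, for a made-up node compute01 hosted at a site called rke01, the automatically derived values would be along the lines of:

```nix
{
  deployment.targetHost = "compute01.example.org";  # inferred from the network configuration
  deployment.tags = [ "infra-rke01" ];              # infra-$site
}
```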


organization.nix

The organization defines the groups and members of the infrastructure team; one day, this information will be synchronized with Kanidm.


For a member to be allowed access to a node, they must be defined in the members attribute set, and their SSH keys must be available in the keys folder.
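A member entry could be sketched like this (jdoe, the attribute names, and the key file layout are all assumptions):

```nix
{
  members = {
    jdoe = {
      email = "jdoe@example.org";
      # The SSH keys themselves live in the keys folder,
      # e.g. as keys/jdoe.keys
    };
  };
}
```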


Groups exist only to simplify access management:

  • The root group will be given administrator access on all nodes
  • The iso group will have its keys included in the ISOs built from the iso folder

Extra groups can be created at will, to be used in node-specific modules.
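Group membership could then be sketched as follows (all member names are made up; only root and iso carry the special meanings described above):

```nix
{
  groups = {
    root = [ "jdoe" ];            # administrators on all nodes
    iso  = [ "jdoe" "asmith" ];   # keys included in the built ISOs
    # extra groups, consumed by node-specific modules
    monitoring = [ "asmith" ];
  };
}
```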


verify.nix

The meta configuration can be evaluated as a module in order to perform checks on the structure of the data.
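Such a check could be sketched with the module system's evalModules, roughly as follows (the exact invocation and file roles are assumptions):

```nix
let
  lib = import <nixpkgs/lib>;
in
lib.evalModules {
  # options.nix declares the expected structure and assertions;
  # the metadata itself is fed in to be type-checked against it
  modules = [ ./options.nix ./default.nix ];
}
```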