Services maintained by OSCI are managed using the OSAS/community-cage-infra-ansible repository on GitLab. Contributions are welcome.
This repository contains Ansible playbooks to deploy VMs and configure hosts to provide each service. Only a few secrets (tokens, or email addresses we keep private to avoid spam) are protected using Ansible Vault.
Some communities (RDO, oVirt…) possess their own resources with an internal Infra team; in this case the deployment rules are hosted in their own repositories.
Along with these rules we sometimes need specific packages for certain software. For example, the mailman3 Ansible role uses the Mailman 3 repository maintained by a fellow developer. We also maintain our own repository for important fixes.
- VM Management
- Host Network Setup
- DNS Management
- Post Installation
Our resources are located in the Community Cage in the RDU datacenter. OSCI is managed as a separate tenant of the Community Cage with the following resources:
All machines with a public IP address are currently reachable over SSH using keys. OSCI admins' keys are installed automatically via Ansible; root access for other users may also be granted on a case-by-case basis.
Internal/Management Access (for tenants and OSCI administrators)
Machines in VLANs without public IPs can be reached through a jump host. It is currently used to reach the OSCI Internal VLAN and the Tenants Management VLAN.
Each tenant’s administrators should have their SSH key(s) registered first. Access is restricted to specific host+port combinations, except for OSCI admins.
You can reach a host via its internal DNS name or IP with:

```
ssh -J <tenant>@soeru.osci.io <user>@<target-host>
```
Port forwarding is allowed to the same host+port targets:

```
ssh -J <tenant>@soeru.osci.io -L <local-port>:<target-host>:<remote-port> <user>@<target-host>
```
If the target host’s SSH implementation does not support port forwarding, or you only need forwarding, you can use this instead:

```
ssh -N -L <local-port>:<target-host>:<remote-port> <tenant>@soeru.osci.io
```
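If you connect frequently, the jump can also be made persistent in your SSH client configuration. A minimal sketch for `~/.ssh/config`, assuming a hypothetical `*.int.osci.io` naming pattern for internal hosts (adjust the pattern to the actual internal domain):

```
# ~/.ssh/config — the Host pattern below is an assumption, adjust as needed
Host *.int.osci.io
    ProxyJump <tenant>@soeru.osci.io
```

With this in place, a plain `ssh <user>@<target-host>` hops through soeru.osci.io transparently.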
To simplify administration of your hosts via Ansible, you can create an osci_internal_zone group with all hosts in the Internal VLAN, and create a group_vars file for it containing the following (replace <tenant> with your tenant login):

```yaml
---
ansible_ssh_common_args: "-o ProxyJump=<tenant>@soeru.osci.io"
```
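For illustration, a minimal inventory sketch to pair with that variable (the host names are placeholders, not real OSCI hosts):

```
# inventory — hypothetical host names for illustration
[osci_internal_zone]
web1.internal.example.org
db1.internal.example.org
```

The `ansible_ssh_common_args` value then lives in `group_vars/osci_internal_zone.yml`, so every play targeting the group goes through the jump host automatically.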
OSCI administrators may also jump using `-J firstname.lastname@example.org`.
Management Access for OSCI Resources
All bare-metal machines and equipment are accessible for administration tasks via the management VLAN:
- SSH gives access to a shell or a CLI (for switches or CMM)
- jump using
- for SuperMicro switches, use `-o Ciphers=aes256-cbc -o PreferredAuthentications=password`, but do not use
- jump using
- CMM or blade admin interfaces both allow web UI and IPMI access
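Putting those SSH options together, a sketch of a full invocation for a SuperMicro switch (the `admin` login and the switch address are assumptions, not taken from this document):

```
ssh -J <tenant>@soeru.osci.io \
    -o Ciphers=aes256-cbc \
    -o PreferredAuthentications=password \
    admin@<switch-address>
```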
In case we totally break the network/SSH configuration on bare-metal hosts, access is still possible via a console server. Each host is reachable by connecting over SSH to conserve.adm.osci.io on a specific port (see the table below):
|Port|Device|
|----|------|
|7003|Catatonic Switch A1|
|7004|Catatonic Switch A2|
|7005|Catatonic CMM 2|
|7006|Catatonic CMM 1|
You first need to authenticate with your console server account, and then you can access a direct console on the host.
Using Ctrl-z (even via SSH) allows you to access a menu and quit.
OSCI admins can log in as root via SSH on the standard port to access a UNIX shell. The `portaccessmenu` command lists available machines and connects to them; be aware that you need to authenticate with your console server account first, even if you’re already logged in. The `configmenu` command is used to set up the device (users, groups, ports, ACLs…). The device configuration has been saved (manually) to file.rdu.redhat.com:/mnt/share/OSAS/backups/Conserve/, so please update it if needed.
To create a new user, follow these steps:
1. System Administration
2. User Administration
3. enter the username
4. 3 - Users (Port access only)
5. 1 - Port Access Menu
6. enter the password twice
7. ESC twice