/usr/share/doc/python-os-cloud-config/html/_sources/usage.txt is in python-os-cloud-config 0.2.6-1.

========
Usage
========

To use os-cloud-config in a project::

    import os_cloud_config

-----------------------------------
Initializing Keystone for a host
-----------------------------------

The init-keystone command line utility initializes Keystone for use with normal
authentication by creating the admin and service tenants, the admin role and
the admin user, configuring certificates, and finally registering the initial
identity endpoint.

.. note::

   init-keystone will wait for a user-specified amount of time for a Keystone
   service to be running on the specified host. The default is a 10 minute
   wait time with 10 seconds between poll attempts.

For example::

    init-keystone -o 192.0.2.1 -t unset -e admin@example.com -p unset -u root

That acts on the 192.0.2.1 host, sets the admin token and the admin password
to the string "unset", the admin e-mail address to "admin@example.com", and
uses the root user to connect to the host via ssh to configure certificates.
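
The wait-and-poll behaviour described in the note above can be sketched as a
simple retry loop. This is an illustrative sketch only, not the tool's actual
implementation; ``is_up`` stands in for whatever health check is performed
against the Keystone host::

```python
import time

def wait_for_keystone(is_up, timeout=600, interval=10):
    """Poll is_up() until it returns True or the timeout expires,
    mirroring the documented 10-minute wait with 10-second polls."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # is_up is a hypothetical health-check callable supplied by the caller
        if is_up():
            return True
        time.sleep(interval)
    return False
```
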
--------------------------------------------
Registering nodes with a baremetal service
--------------------------------------------
The register-nodes command line utility supports registering nodes with
either Ironic or Nova-baremetal. Ironic will be used if the Ironic service
is registered with Keystone.

.. note::

   register-nodes will ask Ironic to power off every machine as they are
   registered.

.. note::

   register-nodes will wait up to 10 minutes for the baremetal service to
   register a node.

The nodes argument to register-nodes is a JSON file describing the nodes to
be registered as a list of objects. If a node is already registered, the
details from the JSON file will be used to update the node registration.

.. note::

   Nova-baremetal does not support updating registered nodes; any previously
   registered nodes will be skipped.

For example::

    register-nodes -s seed -n /tmp/one-node

Where /tmp/one-node contains::

    [
        {
            "memory": "2048",
            "disk": "30",
            "arch": "i386",
            "pm_user": "steven",
            "pm_addr": "192.168.122.1",
            "pm_password": "password",
            "pm_type": "pxe_ssh",
            "mac": [
                "00:76:31:1f:f2:a0"
            ],
            "cpu": "1"
        }
    ]

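
Before handing a nodes file to register-nodes, it can be useful to sanity-check
it first. The helper below is an illustrative sketch, not part of
os-cloud-config; the field list is taken from the example file above, and real
deployments may need further fields depending on the power driver::

```python
import json

# Fields taken from the example node above; other power-management
# fields may apply for other pm_type values.
REQUIRED = {"memory", "disk", "arch", "cpu", "mac",
            "pm_type", "pm_addr", "pm_user", "pm_password"}

def missing_fields(path):
    """Return a list of (node index, sorted missing field names)
    for every node in the file that lacks a documented field."""
    with open(path) as f:
        nodes = json.load(f)
    return [(i, sorted(REQUIRED - node.keys()))
            for i, node in enumerate(nodes)
            if REQUIRED - node.keys()]
```
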
----------------------------------------------------------
Generating keys and certificates for use with Keystone PKI
----------------------------------------------------------
The generate-keystone-pki command line utility generates keys and certificates
which Keystone uses for signing authentication tokens.

- Keys and certificates can be generated into separate files::

      generate-keystone-pki /tmp/certificates

  That creates four files with signing and CA keys and certificates in the
  /tmp/certificates directory.

- The key and certificates can be generated into a heat environment file::

      generate-keystone-pki -j overcloud-env.json

  That adds the following values to the overcloud-env.json file::

      {
          "parameters": {
              "KeystoneSigningKey": "some_key",
              "KeystoneSigningCertificate": "some_cert",
              "KeystoneCACertificate": "some_cert"
          }
      }

  The CA key is not added because Keystone PKI does not need it.

- The key and certificates can be generated into an os-apply-config metadata
  file::

      generate-keystone-pki -s -j local.json

  This adds the following values to the local.json file::

      {
          "keystone": {
              "signing_certificate": "some_cert",
              "signing_key": "some_key",
              "ca_certificate": "some_cert"
          }
      }

  The CA key is not added because Keystone PKI does not need it.

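
The two output formats above carry the same material under different key
names. As an illustrative sketch only (using exactly the key names shown
above), the heat environment form can be mapped onto the os-apply-config
metadata form like this::

```python
def heat_env_to_metadata(env):
    """Map the heat environment keys shown above onto the
    os-apply-config metadata layout (the CA key is absent in both)."""
    params = env["parameters"]
    return {
        "keystone": {
            "signing_key": params["KeystoneSigningKey"],
            "signing_certificate": params["KeystoneSigningCertificate"],
            "ca_certificate": params["KeystoneCACertificate"],
        }
    }
```
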
---------------------
Setting up networking
---------------------

The setup-neutron command line utility sets up either a physical control
plane network suitable for deployment clouds, or an external network with an
internal floating network suitable for workload clouds.

The network JSON argument allows specifying the network(s) to be created::

    setup-neutron -n /tmp/ctlplane

Where /tmp/ctlplane contains::

    {
        "physical": {
            "gateway": "192.0.2.1",
            "metadata_server": "192.0.2.1",
            "cidr": "192.0.2.0/24",
            "allocation_end": "192.0.2.20",
            "allocation_start": "192.0.2.2",
            "name": "ctlplane"
        }
    }

This will create a Neutron flat net with a name of 'ctlplane', and a subnet
with a CIDR of '192.0.2.0/24', a metadata server and gateway of '192.0.2.1',
and will allocate DHCP leases in the range of '192.0.2.2' to '192.0.2.20', as
well as adding a route for 169.254.169.254/32.
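
The relationships in that file can be checked with the standard library's
ipaddress module. This is an illustrative pre-check only, not something
setup-neutron requires; ``subnet_is_consistent`` is a hypothetical helper::

```python
import ipaddress

def subnet_is_consistent(net):
    """Check that the gateway and allocation pool fall inside the CIDR
    and that the pool's start does not exceed its end."""
    cidr = ipaddress.ip_network(net["cidr"])
    addrs = {k: ipaddress.ip_address(net[k])
             for k in ("gateway", "allocation_start", "allocation_end")}
    return (all(a in cidr for a in addrs.values())
            and addrs["allocation_start"] <= addrs["allocation_end"])

# The "physical" network definition from the example above.
ctlplane = {
    "gateway": "192.0.2.1",
    "cidr": "192.0.2.0/24",
    "allocation_start": "192.0.2.2",
    "allocation_end": "192.0.2.20",
}
```
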

setup-neutron also supports datacentre networks that require 802.1Q VLAN tags::

    setup-neutron -n /tmp/ctlplane-dc

Where /tmp/ctlplane-dc contains::

    {
        "physical": {
            "gateway": "192.0.2.1",
            "metadata_server": "192.0.2.1",
            "cidr": "192.0.2.0/24",
            "allocation_end": "192.0.2.20",
            "allocation_start": "192.0.2.2",
            "name": "public",
            "physical_network": "ctlplane",
            "segmentation_id": 25
        }
    }


This creates a Neutron 'net' called ``public`` using VLAN tag 25, which uses
the existing 'net' called ``ctlplane`` as a physical transport.

.. note::

   The key ``physical_network`` is required when creating a network that
   specifies a ``segmentation_id``, and it must reference an existing net.

setup-neutron can also create two networks suitable for workload clouds::

    setup-neutron -n /tmp/float

Where /tmp/float contains::

    {
        "float": {
            "cidr": "10.0.0.0/8",
            "name": "default-net"
        },
        "external": {
            "name": "ext-net",
            "cidr": "192.0.2.0/24",
            "allocation_start": "192.0.2.45",
            "allocation_end": "192.0.2.64",
            "gateway": "192.0.2.1"
        }
    }


This creates two Neutron nets, the first with a name of 'default-net' and set
as shared, and the second with a name of 'ext-net' and the 'router:external'
property set to true. The default-net subnet has a CIDR of 10.0.0.0/8 and a
default nameserver of 8.8.8.8, and the ext-net subnet has a CIDR of
192.0.2.0/24, a gateway of 192.0.2.1, and allocates DHCP leases from
192.0.2.45 to 192.0.2.64. setup-neutron will also create a router for the
float network, setting the external network as the gateway.

----------------
Creating flavors
----------------

The setup-flavors command line utility creates flavors in Nova -- either using
the nodes that have been registered to provide a distinct set of hardware that
is provisioned, or by specifying the set of flavors that should be created.

.. note::

   setup-flavors will delete the existing default flavors, such as m1.small
   and m1.xlarge. In this use case the cloud uses only baremetal hardware,
   so the flavors need only describe the hardware that is available.

Utilising the /tmp/one-node file specified in the register-nodes example
above, create a flavor::

    setup-flavors -n /tmp/one-node

This results in a flavor called "baremetal_2048_30_None_1".

If the ROOT_DISK environment variable is set, it will be used as the disk
size, with the remainder of the node's disk set as ephemeral storage, giving
a flavor name of "baremetal_2048_10_20_1".
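
The naming scheme implied by those two examples can be reproduced as follows.
This is a sketch inferred from the documented flavor names, not the tool's
actual code::

```python
def flavor_name(node, root_disk=None):
    """Build a baremetal_<memory>_<disk>_<ephemeral>_<cpu> name.

    When a root disk size is given, the remainder of the node's disk
    becomes ephemeral storage, as described above.
    """
    if root_disk is None:
        disk, ephemeral = node["disk"], None
    else:
        disk = root_disk
        ephemeral = int(node["disk"]) - int(root_disk)
    return "baremetal_%s_%s_%s_%s" % (
        node["memory"], disk, ephemeral, node["cpu"])
```

With the example node (memory 2048, disk 30, cpu 1) this yields
"baremetal_2048_30_None_1", and with a root disk of 10 it yields
"baremetal_2048_10_20_1".

```python
```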

Conversely, you can specify a JSON file describing the flavors to create::

    setup-flavors -f /tmp/one-flavor

Where /tmp/one-flavor contains::

    [
        {
            "name": "controller",
            "memory": "2048",
            "disk": "30",
            "arch": "i386",
            "cpu": "1"
        }
    ]


The JSON file can also contain an 'extra_specs' parameter, which is a JSON
object describing the key-value pairs to add into the flavor metadata::

    [
        {
            "name": "controller",
            "memory": "2048",
            "disk": "30",
            "arch": "i386",
            "cpu": "1",
            "extra_specs": {
                "key": "value"
            }
        }
    ]