ceph-mgr orchestrator modules¶
Warning
This is developer documentation, describing Ceph internals that are only relevant to people writing ceph-mgr orchestrator modules.
In this context, orchestrator refers to some external service that provides the ability to discover devices and create Ceph services. This includes external projects such as ceph-ansible, DeepSea, and Rook.
An orchestrator module is a ceph-mgr module (ceph-mgr module developer’s guide) which implements common management operations using a particular orchestrator.
Orchestrator modules subclass the Orchestrator class: this class is an interface that only provides method definitions to be implemented by subclasses. The purpose of defining this common interface for different orchestrators is to enable common UI code, such as the dashboard, to work with various different backends.
Behind all the abstraction, the purpose of orchestrator modules is simple: enable Ceph to do things like discover available hardware, create and destroy OSDs, and run MDS and RGW services.
A tutorial is not included here: for full and concrete examples, see the existing implemented orchestrator modules in the Ceph source tree.
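Even so, the basic shape of a backend is easy to sketch. The following is a minimal, hypothetical skeleton (the class name and the choice of overridden methods are illustrative, and it assumes the mgr_module import that is available inside ceph-mgr):
# Hypothetical skeleton of an orchestrator backend module. A real module
# would also implement the remaining Orchestrator methods it supports.
from mgr_module import MgrModule
import orchestrator

class MyOrchestratorBackend(MgrModule, orchestrator.Orchestrator):
    def available(self):
        # Report whether we can talk to the external orchestrator.
        return True, ""

    def get_hosts(self):
        # A real backend would return a Completion that eventually
        # resolves to a list of InventoryNode objects.
        raise NotImplementedError()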
Glossary¶
- Stateful service
a daemon that uses local storage, such as OSD or mon.
- Stateless service
a daemon that doesn’t use any local storage, such as an MDS, RGW, nfs-ganesha, or iSCSI gateway.
- Label
arbitrary string tags that may be applied by administrators to nodes. Typically administrators use labels to indicate which nodes should run which kinds of service. Labels are advisory (from human input) and do not guarantee that nodes have particular physical capabilities.
- Drive group
collection of block devices with common/shared OSD formatting (typically one or more SSDs acting as journals/dbs for a group of HDDs).
- Placement
choice of which node is used to run a service.
Key Concepts¶
The underlying orchestrator remains the source of truth for information about whether a service is running, what is running where, which nodes are available, etc. Orchestrator modules should avoid taking any internal copies of this information, and read it directly from the orchestrator backend as much as possible.
Bootstrapping nodes and adding them to the underlying orchestration system is outside the scope of Ceph’s orchestrator interface. Ceph can only work on nodes when the orchestrator is already aware of them.
Calls to orchestrator modules are all asynchronous, and return completion objects (see below) rather than returning values immediately.
Where possible, placement of stateless services should be left up to the orchestrator.
Completions and batching¶
All methods that read or modify the state of the system can potentially be long running. To handle that, all such methods return a Completion object. Orchestrator modules must implement the process method: this takes a list of completions, and is responsible for checking if they’re finished, and advancing the underlying operations as needed.
Each orchestrator module implements its own underlying mechanisms for completions. This might involve running the underlying operations in threads, or batching the operations up before later executing in one go in the background. If implementing such a batching pattern, the module would do no work on any operation until it appeared in a list of completions passed into process.
Some operations need to show progress. Those operations need to add a ProgressReference to the completion. At some point, the progress reference becomes effective, meaning that the operation has really happened (e.g. a service has actually been started).
- Orchestrator.process(completions)¶
Given a list of Completion instances, process any which are incomplete.
Callers should inspect the detail of each completion to identify partial completion/progress information, and present that information to the user.
This method should not block, as this would make it slow to query a status while other long-running operations are in progress.
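A sketch of how a backend might implement process under the batching pattern described above; the _advance helper is hypothetical and stands in for whatever mechanism the backend uses to push its queued operations forward:
def process(self, completions):
    # Hedged sketch: only work on completions explicitly handed to us,
    # so queued operations are not executed prematurely.
    for c in completions:
        if c.is_finished:
            continue
        # e.g. poll the external orchestrator, or submit a queued batch
        self._advance(c)  # hypothetical backend-specific helper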
- class orchestrator.Completion(_first_promise=None, value=<object object>, on_complete=None, name=None)¶
Combines multiple promises into one overall operation.
Completions are composable: one completion can be called from another completion, making them re-usable like Promises. E.g.:
>>> return Orchestrator().get_hosts().then(self._create_osd)
where get_hosts returns a Completion of a list of hosts and _create_osd takes a list of hosts.
The concept behind this is to store the computation steps explicitly and then explicitly evaluate the chain:
>>> p = Completion(on_complete=lambda x: x*2).then(on_complete=lambda x: str(x))
... p.finalize(2)
... assert p.result == "4"
or graphically:
+---------------+      then      +------------------+
|               |                |                  |
| lambda x: x*2 |      +-->      | lambda x: str(x) |
|               |                |                  |
+---------------+                +------------------+
- fail(e)¶
Sets the whole completion to be failed with this exception and ends the evaluation.
- property has_result¶
Does the operation already have a result?
For write operations, it can already have a result if the orchestrator’s configuration is persistently written. Typically this would indicate that an update had been written to a manifest, but that the update had not necessarily been pushed out to the cluster.
- property is_errored¶
Has the completion failed? The default implementation looks for self.exception. Can be overridden.
- property is_finished¶
Could the external operation be deemed complete, or should we wait? We must wait for a read operation only if it is not complete.
- property needs_result¶
Could the external operation be deemed complete, or should we wait? We must wait for a read operation only if it is not complete.
- property progress_reference¶
ProgressReference. Marks this completion as a write completion.
- property result¶
The result of the operation that we waited for. Only valid after calling Orchestrator.process() on this completion.
- result_str()¶
Force a string representation of the result.
class
orchestrator.
ProgressReference
(message, mgr, completion=None)¶ -
completion
= None¶ The completion can already have a result, before the write operation is effective. progress == 1 means, the services are created / removed.
- property progress¶
If an orchestrator module can provide more detailed progress information, it also needs to call progress.update().
Placement¶
In general, stateless services do not require any specific placement rules, as they can run anywhere that sufficient system resources are available. However, some orchestrators may not include the functionality to choose a location in this way, so we can optionally specify a location when creating a stateless service.
OSD services generally require a specific placement choice, as this will determine which storage devices are used.
Error Handling¶
The main goal of error handling within orchestrator modules is to provide debug information to assist users when dealing with deployment errors.
- class orchestrator.OrchestratorError¶
General orchestrator-specific error.
Used for deployment, configuration or user errors.
It’s not intended for programming errors or orchestrator internal errors.
- class orchestrator.NoOrchestrator(msg='No orchestrator configured (try `ceph orchestrator set backend`)')¶
No orchestrator is configured.
- class orchestrator.OrchestratorValidationError¶
Raised when an orchestrator doesn’t support a specific feature.
In detail, orchestrators need to explicitly deal with different kinds of errors:
1. No orchestrator configured
See NoOrchestrator.
2. An orchestrator doesn’t implement a specific method.
For example, an orchestrator doesn’t support add_host. In this case, a NotImplementedError is raised.
3. Missing features within implemented methods.
E.g. optional parameters to a command that are not supported by the backend (e.g. the hosts field in the Orchestrator.update_mons() command with the rook backend).
4. Input validation errors
The orchestrator_cli module and other calling modules are supposed to provide meaningful error messages.
5. Errors when actually executing commands
The resulting Completion should contain an error string that assists in understanding the problem. In addition, Completion.is_errored() is set to True.
6. Invalid configuration in the orchestrator modules
This can be tackled similarly to 5.
All other errors are unexpected orchestrator issues and should thus raise an exception that is then logged into the mgr log file. If there is a completion object at that point, Completion.result() may contain an error message.
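For illustration, a calling module (e.g. one based on OrchestratorClientMixin, described under Client Modules below, with import orchestrator in scope) might deal with the first two kinds of error roughly like this; the surrounding method name is hypothetical:
def list_hosts_safely(self):
    # Hedged sketch: handle "no orchestrator configured" and
    # "method not implemented" explicitly, and report other
    # orchestrator errors to the user.
    try:
        completion = self.get_hosts()
        self._orchestrator_wait([completion])
        return completion.result
    except orchestrator.NoOrchestrator:
        self.log.error("no orchestrator backend configured")
    except NotImplementedError:
        self.log.error("the orchestrator does not implement get_hosts")
    except orchestrator.OrchestratorError as e:
        self.log.error("orchestrator call failed: %s", e)
    return []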
Excluded functionality¶
Ceph’s orchestrator interface is not a general-purpose framework for managing Linux servers – it is deliberately constrained to manage the Ceph cluster’s services only.
Multipathed storage is not handled (multipathing is unnecessary for Ceph clusters). Each drive is assumed to be visible only on a single node.
Host management¶
- Orchestrator.add_host(host)¶
Add a host to the orchestrator inventory.
- Parameters
host – hostname
- Orchestrator.remove_host(host)¶
Remove a host from the orchestrator inventory.
- Parameters
host – hostname
- Orchestrator.get_hosts()¶
Report the hosts in the cluster.
The default implementation is extra slow.
- Returns
list of InventoryNodes
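A hedged usage sketch from a module that mixes in OrchestratorClientMixin (see Client Modules below); the hostname and the assumption that InventoryNode exposes a name attribute are illustrative:
def add_and_list_hosts(self):
    # Add a host, wait for the asynchronous call, then list all hosts.
    completion = self.add_host('node1')        # illustrative hostname
    self._orchestrator_wait([completion])
    hosts = self.get_hosts()
    self._orchestrator_wait([hosts])
    for node in hosts.result:
        self.log.info("known host: %s", node.name)  # assumes InventoryNode.name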
Inventory and status¶
- Orchestrator.get_inventory(node_filter=None, refresh=False)¶
Returns something that was created by ceph-volume inventory.
- Returns
list of InventoryNode
- class orchestrator.InventoryFilter(labels=None, nodes=None)¶
When fetching inventory, use this filter to avoid unnecessarily scanning the whole estate.
Typical use:
- filter by node when presenting a UI workflow for configuring a particular server.
- filter by label when not all of the estate is Ceph servers, and we want to only learn about the Ceph servers.
- filter by label when we are particularly interested in e.g. OSD servers.
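For example, a UI workflow configuring a single server might narrow the inventory query like this (a hedged sketch; the node name and the assumption that each InventoryNode carries a Devices container in its devices attribute are illustrative):
def show_devices_of_node(self):
    # Restrict the inventory scan to one node instead of the whole estate.
    flt = orchestrator.InventoryFilter(nodes=['node1'])
    completion = self.get_inventory(node_filter=flt)
    self._orchestrator_wait([completion])
    for node in completion.result:
        for dev in node.devices.devices:  # Devices is a container of Device
            self.log.info("%s: %s", node.name, dev.path)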
- class ceph.deployment.inventory.Devices(devices)¶
A container for Device instances with reporting.
- class ceph.deployment.inventory.Device(path, sys_api=None, available=None, rejected_reasons=None, lvs=None, device_id=None)¶
- Orchestrator.describe_service(service_type=None, service_id=None, node_name=None, refresh=False)¶
Describe a service (of any kind) that is already configured in the orchestrator. For example, when viewing an OSD in the dashboard we might like to also display information about the orchestrator’s view of the service (like the kubernetes pod ID).
When viewing a CephFS filesystem in the dashboard, we would use this to display the pods currently being run for MDS daemons.
- Returns
list of ServiceDescription objects.
- class orchestrator.ServiceDescription(nodename=None, container_id=None, container_image_id=None, container_image_name=None, service=None, service_instance=None, service_type=None, version=None, rados_config_location=None, service_url=None, status=None, status_desc=None)¶
For responding to queries about the status of a particular service, stateful or stateless.
This is not about health or performance monitoring of services: it’s about letting the orchestrator tell Ceph whether and where a service is scheduled in the cluster. When an orchestrator tells Ceph “it’s running on node123”, that’s not a promise that the process is literally up this second; it’s a description of where the orchestrator has decided the service should run.
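A hedged sketch of querying the orchestrator’s view of all MDS daemons, using the ServiceDescription fields listed above:
def where_are_my_mds_daemons(self):
    completion = self.describe_service(service_type='mds')
    self._orchestrator_wait([completion])
    for desc in completion.result:
        # nodename / service_instance are ServiceDescription attributes
        self.log.info("mds.%s scheduled on %s",
                      desc.service_instance, desc.nodename)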
Service Actions¶
- Orchestrator.service_action(action, service_type, service_name=None, service_id=None)¶
Perform an action (start/stop/reload) on a service.
Either service_name or service_id must be specified:
- If using service_name, perform the action on that entire logical service (i.e. all daemons providing that named service).
- If using service_id, perform the action on a single specific daemon instance.
- Parameters
action – one of “start”, “stop”, “reload”, “restart”, “redeploy”
service_type – e.g. “mds”, “rgw”, …
service_name – name of logical service (“cephfs”, “us-east”, …)
service_id – service daemon instance (usually a short hostname)
- Return type
Completion
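For example, restarting every daemon of a logical MDS service might look like this (a hedged sketch; the service name is illustrative):
def restart_cephfs_mds(self):
    completion = self.service_action('restart', 'mds', service_name='cephfs')
    self._orchestrator_wait([completion])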
OSD management¶
- Orchestrator.create_osds(drive_group)¶
Create one or more OSDs within a single Drive Group.
The principal argument here is the drive_group member of OsdSpec: other fields are advisory/extensible for any finer-grained OSD feature enablement (choice of backing store, compression/encryption, etc).
- Parameters
drive_group – DriveGroupSpec
all_hosts – TODO: this is required because the orchestrator methods are not composable. This parameter can probably be removed, because each orchestrator can use the get_inventory method and the drive_group.host_pattern attribute to obtain the list of hosts where the operation should be applied.
- Orchestrator.remove_osds(osd_ids)¶
- Parameters
osd_ids – list of OSD IDs
destroy – marks the OSD as being destroyed. See OSD Replacement.
Note that this can only remove OSDs that were successfully created (i.e. got an OSD ID).
- class ceph.deployment.drive_group.DeviceSelection(paths=None, model=None, size=None, rotational=None, limit=None, vendor=None, all=False)¶
Used within ceph.deployment.drive_group.DriveGroupSpec to specify the devices used by the Drive Group.
Any attributes (even none) can be included in the device specification structure.
- all = None¶
Matches all devices. Can only be used for data devices.
- limit = None¶
Limit the number of devices added to this Drive Group. Devices are used from top to bottom in the output of ceph-volume inventory.
- model = None¶
A wildcard string, e.g. “SDD*” or “SanDisk SD8SN8U5”.
- paths = None¶
List of absolute paths to the devices.
- rotational = None¶
Whether the drive is rotational or not.
- size = None¶
Size specification of format LOW:HIGH. Can also take the form :HIGH, LOW:, or an exact value (as ceph-volume inventory reports).
- vendor = None¶
Match on the VENDOR property of the drive.
- class ceph.deployment.drive_group.DriveGroupSpec(host_pattern, data_devices=None, db_devices=None, wal_devices=None, journal_devices=None, data_directories=None, osds_per_device=None, objectstore='bluestore', encrypted=False, db_slots=None, wal_slots=None, osd_id_claims=None, block_db_size=None, block_wal_size=None, journal_size=None)¶
Describe a drive group in the same form that ceph-volume understands.
- block_db_size = None¶
Set (or override) the “bluestore_block_db_size” value, in bytes.
- block_wal_size = None¶
Set (or override) the “bluestore_block_wal_size” value, in bytes.
- data_devices = None¶
- data_directories = None¶
A list of strings, containing paths which should back OSDs.
- db_devices = None¶
- db_slots = None¶
How many OSDs per DB device.
- encrypted = None¶
true or false
- host_pattern = None¶
An fnmatch pattern to select hosts. Can also be a single host.
- journal_devices = None¶
- journal_size = None¶
Set the journal_size in bytes.
- objectstore = None¶
filestore or bluestore
- osd_id_claims = None¶
Optional: mapping of OSD id to DeviceSelection, used when the created OSDs are meant to replace previous OSDs on the same node. See OSD Replacement.
- osds_per_device = None¶
Number of OSD daemons per “DATA” device. To fully utilize NVMe devices, multiple OSDs are required.
- wal_devices = None¶
- wal_slots = None¶
How many OSDs per WAL device.
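Putting the pieces together, a hedged sketch of describing a drive group and asking the orchestrator to create the OSDs; the host pattern, device selections and method name are illustrative:
from ceph.deployment.drive_group import DriveGroupSpec, DeviceSelection

def create_hdd_osds(self):
    # All rotational drives on hosts matching "node*" become data devices;
    # DB volumes go on drives of one (illustrative) SSD model.
    dg = DriveGroupSpec(
        host_pattern='node*',
        data_devices=DeviceSelection(rotational=True),
        db_devices=DeviceSelection(model='SanDisk SD8SN8U5'),
    )
    completion = self.create_osds(dg)
    self._orchestrator_wait([completion])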
- Orchestrator.blink_device_light(ident_fault, on, locations)¶
Instructs the orchestrator to enable or disable either the ident or the fault LED.
- Parameters
ident_fault – either "ident" or "fault"
on – True = on
locations – See orchestrator.DeviceLightLoc
- class orchestrator.DeviceLightLoc¶
Describes a specific device on a specific host. Used for enabling or disabling LEDs on devices.
- hostname as in orchestrator.Orchestrator.get_hosts()
- device_id: e.g. ABC1234DEF567-1R1234_ABC8DE0Q. See ceph osd metadata | jq '.[].device_ids'
OSD Replacement¶
See Replacing an OSD for the underlying process.
Replacing OSDs is fundamentally a two-staged process, as users need to physically replace drives. The orchestrator therefore exposes this as a two-staged process.
Phase one is a call to Orchestrator.remove_osds() with destroy=True in order to mark the OSD as destroyed.
Phase two is a call to Orchestrator.create_osds() with a Drive Group that has DriveGroupSpec.osd_id_claims set to the destroyed OSD ids.
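A hedged sketch of the two phases; the OSD id, hostname, device path and method name are illustrative, and the shape of osd_id_claims follows its description above (mapping of OSD id to DeviceSelection), which should be checked against the backend in use:
from ceph.deployment.drive_group import DriveGroupSpec, DeviceSelection

def replace_osd_42(self):
    # Phase one: mark the OSD as destroyed so its id can be reused.
    completion = self.remove_osds(['42'], destroy=True)
    self._orchestrator_wait([completion])

    # ... the drive is physically replaced here ...

    # Phase two: recreate the OSD, claiming the destroyed id.
    dg = DriveGroupSpec(
        host_pattern='node1',
        data_devices=DeviceSelection(paths=['/dev/sdx']),
        osd_id_claims={'42': DeviceSelection(paths=['/dev/sdx'])},  # assumption
    )
    completion = self.create_osds(dg)
    self._orchestrator_wait([completion])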
Stateless Services¶
- class orchestrator.StatelessServiceSpec(name, placement=None, count=None)¶
Details of stateless service creation.
Request to the orchestrator for a group of stateless services such as MDS, RGW or iSCSI gateways.
- Orchestrator.add_mds(spec)¶
Create a new MDS cluster.
- Orchestrator.remove_mds(name)¶
Remove an MDS cluster.
- Orchestrator.update_mds(spec)¶
Update / redeploy an existing MDS cluster, for example changing the number of service instances.
- Orchestrator.add_rgw(spec)¶
Create a new RGW zone.
- Orchestrator.remove_rgw(zone)¶
Remove an RGW zone.
- Orchestrator.update_rgw(spec)¶
Update / redeploy an existing RGW zone, for example changing the number of service instances.
- class orchestrator.NFSServiceSpec(name, pool=None, namespace=None, count=1, placement=None)¶
- Orchestrator.add_nfs(spec)¶
Create a new NFS cluster.
- Orchestrator.remove_nfs(name)¶
Remove an NFS cluster.
- Orchestrator.update_nfs(spec)¶
Update / redeploy an existing NFS cluster, for example changing the number of service instances.
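A hedged sketch of requesting a small MDS cluster via StatelessServiceSpec from a calling module; the name, count and method name are illustrative:
def create_mds_cluster(self):
    spec = orchestrator.StatelessServiceSpec(name='cephfs', count=2)
    completion = self.add_mds(spec)
    self._orchestrator_wait([completion])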
Upgrades¶
- Orchestrator.upgrade_available()¶
Report on what versions are available to upgrade to.
- Returns
List of strings
- Orchestrator.upgrade_start(upgrade_spec)¶
- Orchestrator.upgrade_status()¶
If an upgrade is currently underway, report on where we are in the process, or if some error has occurred.
- Returns
UpgradeStatusSpec instance
- class orchestrator.UpgradeSpec¶
- class orchestrator.UpgradeStatusSpec¶
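A hedged sketch of querying upgrade information from a calling module; the method name is illustrative:
def report_upgrades(self):
    completion = self.upgrade_available()
    self._orchestrator_wait([completion])
    # upgrade_available() resolves to a list of version strings
    self.log.info("available versions: %s", completion.result)

    status = self.upgrade_status()
    self._orchestrator_wait([status])
    # status.result is an UpgradeStatusSpec instance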
Utility¶
- Orchestrator.available()¶
Report whether we can talk to the orchestrator. This is the place to give the user a meaningful message if the orchestrator isn’t running or can’t be contacted.
This method may be called frequently (e.g. every page load to conditionally display a warning banner), so make sure it’s not too expensive. It’s okay to give a slightly stale status (e.g. based on a periodic background ping of the orchestrator) if that’s necessary to make this method fast.
Note
True doesn’t mean that the desired functionality is actually available in the orchestrator. I.e. this won’t work as expected:
>>> if OrchestratorClientMixin().available()[0]:  # wrong.
...     OrchestratorClientMixin().get_hosts()
- Returns
two-tuple of boolean, string
- Orchestrator.get_feature_set()¶
Describes which methods this orchestrator implements.
Note
True doesn’t mean that the desired functionality is actually possible in the orchestrator. I.e. this won’t work as expected:
>>> api = OrchestratorClientMixin()
... if api.get_feature_set()['get_hosts']['available']:  # wrong.
...     api.get_hosts()
It’s better to ask for forgiveness instead:
>>> try:
...     OrchestratorClientMixin().get_hosts()
... except (OrchestratorError, NotImplementedError):
...     ...
- Returns
Dict of API method names to {'available': True or False}
Client Modules¶
- class orchestrator.OrchestratorClientMixin¶
A module that inherits from OrchestratorClientMixin can directly call all Orchestrator methods without manually calling remote.
Every interface method from Orchestrator is converted into a stub method that internally calls OrchestratorClientMixin._oremote():
>>> class MyModule(OrchestratorClientMixin):
...    def func(self):
...        completion = self.add_host('somehost')  # calls `_oremote()`
...        self._orchestrator_wait([completion])
...        self.log.debug(completion.result)
- set_mgr(mgr)¶
Usable in the Dashboard, which uses a global mgr.