Welcome to networking-dpm’s documentation!

This project provides the OpenStack Neutron mechanism driver and L2 agent for the PR/SM hypervisor of IBM z Systems and IBM LinuxONE machines that are in the DPM (Dynamic Partition Manager) administrative mode.

The DPM mode enables dynamic capabilities of the firmware-based PR/SM hypervisor that are usually known from software-based hypervisors, such as creation, deletion and modification of partitions (i.e. virtual machines) and virtual devices within these partitions, and dynamic assignment of these virtual devices to physical I/O adapters.

The Neutron mechanism driver and L2 agent for DPM are needed for OpenStack compute nodes for DPM, along with the Nova virtualization driver for DPM.

For details about OpenStack for DPM, see the documentation of the nova-dpm OpenStack project.

Overview

Release Notes

1.0.0

networking-dpm 1.0.0 is the first release of the Neutron mechanism driver and its corresponding L2 agent for the PR/SM hypervisor of IBM z Systems and IBM LinuxONE machines that are in the DPM (Dynamic Partition Manager) administrative mode.

New Features

  • Support for flat networks

Known Issues

  • Only a single adapter can be configured per physical network
  • Port 0 of a network adapter is always auto-configured in the guest image. If port 1 is to be used, port 0 must be deconfigured and port 1 configured manually in the instance operating system after the instance has been launched.
  • All bug reports are listed at: https://bugs.launchpad.net/networking-dpm

Using networking-dpm

Installation

The networking-dpm package provides two components:

  • Neutron mechanism driver for DPM
  • L2 agent for DPM

The Neutron mechanism driver for DPM must be registered with the Neutron server on the OpenStack controller node.

The L2 agent for DPM must be installed on every OpenStack compute node for DPM.

This section describes the manual installation of these components onto a controller node and compute node that have already been installed by some means.

The networking-dpm package is released on PyPI as package networking-dpm.

The following table indicates which version of the networking-dpm package on PyPI to use for a particular OpenStack release:

OpenStack release    networking-dpm version
Ocata                1.x.x

Typically, the networking-dpm package will increase its major version number by one for each new OpenStack release.

If you want to install the package for a particular OpenStack release, it is recommended to use the package that has been released to PyPI, rather than installing from a particular branch of a Git repository.

To do that, identify the major version number for the desired OpenStack release from the table above, and install the latest minor and fix version of the package for that major version. Also specify the global upper constraints file for the desired OpenStack release; this ensures that you get the right versions of any dependent packages.

For example, for Ocata:

$ constraints_file=https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/ocata
$ pip install -c$constraints_file "networking-dpm >=1,<2"

If you have good reasons to install the latest, not yet released fix level of the networking-dpm package for a particular (released) OpenStack release, install the networking-dpm package from the stable branch of the GitHub repo for that OpenStack release.

For example, for Ocata:

$ constraints_file=https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/ocata
$ pip install -c$constraints_file git+https://git.openstack.org/openstack/networking-dpm@stable/ocata

If you are a developer and want to install the latest code of the networking-dpm package for the OpenStack release that is in development:

$ constraints_file=https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=master
$ pip install -c$constraints_file git+https://git.openstack.org/openstack/networking-dpm@master

The pip commands above install the packages into the currently active Python environment.

If your active Python environment is a virtual Python environment, the commands above can be issued from a userid without sudo rights.

If you need to install the packages into the system Python environment, you need sudo rights:

$ sudo pip install ...

After installing the networking-dpm package, proceed with its Configuration.

Note that you will also need to install and configure the nova-dpm package on the compute node. For its documentation, see http://nova-dpm.readthedocs.io/en/latest/.

Configuration

Neutron DPM mechanism driver

The Neutron DPM mechanism driver itself does not require any DPM-specific configuration options.

However, certain well-known Neutron ML2 (Modular Layer 2) configuration options must be set in the ML2 configuration file (typically ml2_conf.ini) in order to use it.

Enable the DPM mechanism driver

The DPM mechanism driver must be enabled using the Neutron ML2 mechanism_drivers config option. Typically, the DPM mechanism driver is configured alongside other mechanisms (like Open vSwitch) that are used on the network node or on compute nodes managing other hypervisor types (like KVM). The following example enables the Open vSwitch and the DPM mechanism drivers in Neutron's ML2 config file ml2_conf.ini:

[ml2]
mechanism_drivers = openvswitch,dpm

More details can be found in the OpenStack Configuration Reference on docs.openstack.org.
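
Since this release of the DPM mechanism driver supports flat networks only, the physical networks used by DPM hosts also need to be allowed as flat networks in the ML2 configuration. The following is a minimal sketch; the physical network name physnet1 is an example and must match the name used in the DPM agent's physical_network_adapter_mappings option:

[ml2]
# "flat" must be among the enabled type drivers
type_drivers = flat,vlan,vxlan

[ml2_type_flat]
# physical networks that may be used for flat networks (example name)
flat_networks = physnet1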

Neutron DPM agent

The Neutron DPM agent on the compute node requires DPM-specific options. In addition, some well-known general Neutron options can be set.

General Neutron options

The following common Neutron options can be set in the Neutron DPM agent’s configuration file:

  • [DEFAULT] host
  • [agent] quitting_rpc_timeout
  • [agent] polling_interval

More details can be found in the OpenStack Configuration Reference on docs.openstack.org.
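
For illustration, a sketch of how these options could be set in the Neutron DPM agent's configuration file (the values shown are examples only; host identifies the OpenStack hypervisor host, i.e. the CPCSubset, and is typically set to the same value as in the nova-dpm configuration):

[DEFAULT]
# Host identifier of this CPCSubset (example value)
host = cpcsubset1

[agent]
# Seconds to wait for pending RPC calls when the agent quits (example value)
quitting_rpc_timeout = 60
# Seconds between polling iterations of the agent (example value)
polling_interval = 2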

DPM-specific options

These are the DPM-specific configuration options required by the Neutron DPM agent.

Note

This configuration is auto-generated from the networking-dpm project when this documentation is built. So if you are having issues with an option, please compare your version of networking-dpm with the version of this documentation.

The sample configuration can also be viewed in file form.

[DEFAULT]


[dpm]
#
# Configuration options for IBM z Systems and IBM LinuxONE in DPM (Dynamic
# Partition Manager) administrative mode. A z Systems or LinuxONE machine is
# termed "CPC" (Central Processor Complex). The CPCs are managed via the Web
# Services API exposed by the "HMC" (Hardware Management Console). One HMC can
# manage multiple CPCs.
#
#
# DPM config options for the Neutron agent on the compute node (one agent
# instance for each OpenStack hypervisor host) specify the target CPC, the HMC
# managing it, and the OpenStack physical networks for the OpenStack hypervisor
# host and their backing network adapters and ports in the target CPC.

#
# From networking_dpm
#

#
# The OpenStack physical networks that can be used by this OpenStack hypervisor
# host, and their backing network adapters and ports in the target CPC.
#
# This is a multi-line option. Each instance (line) of the option defines one
# physical network for use by this OpenStack hypervisor host, and the network
# adapter and port that is used for that physical network, using this syntax:
#
# ```
#     <physical-network-name>:<adapter-object-id>[:<port-element-id>]
# ```
#
# * `<physical-network-name>` is the name of the OpenStack physical network.
# * `<adapter-object-id>` is the object-id of the network adapter in the target
#   CPC that is used for this physical network.
# * `<port-element-id>` is the element-id of the port on that network adapter.
#   It is optional and defaults to 0.
#
# The instances (lines) of this option for a particular Neutron agent
#
# * must not specify the same physical network more than once, and
# * must not specify the same adapter and port more than once, and
# * must have all of their physical networks specified in the
#   corresponding `*mappings` config option of the Neutron L2 agent service
#   on all network nodes, and
# * must have all of their physical networks specified in the
#   `ml2.network_vlan_ranges` config option of the Neutron server, if vlan
#   self service networks should be used.
#  (multi valued)
#physical_network_adapter_mappings = physnet1:12345678-1234-1234-1234-123456789a
#physical_network_adapter_mappings = physnet2:12345678-1234-1234-1234-123456789b:1
#physical_network_adapter_mappings = physnet3:12345678-1234-1234-1234-123456789c:0

#
#     Hostname or IP address of the HMC that manages the target CPC (string
# value)
#hmc = <None>

#
#     User name for connection to the HMC (string value)
#hmc_username = <None>

#
#     Password for connection to the HMC (string value)
#hmc_password = <None>

#
#     DPM Object-id of the target CPC (string value)
#cpc_object_id = <None>
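
Putting this together, a filled-in configuration for the Neutron DPM agent might look like the following sketch (HMC address, credentials, object-ids and the physical network name are placeholders):

[dpm]
hmc = 192.0.2.10
hmc_username = hmcuser
hmc_password = hmcpassword
cpc_object_id = 12345678-1234-1234-1234-123456789f
physical_network_adapter_mappings = physnet1:12345678-1234-1234-1234-123456789a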

Hardware Support

IBM z Systems z13 and LinuxONE

OSA Adapters

OSA adapters
Adapter                                Feature Codes   CHPIDs per adapter   Ports per CHPID   Total ports
OSA-Express5S 10 GbE [2]               #0415, #0416    1                    1                 1
OSA-Express5S GbE [2]                  #0413, #0414    1                    2                 2
OSA-Express5S 1000BASE-T Ethernet [2]  #0417           1                    2                 2
OSA-Express4S GbE [6]                  #0404, #0405    2                    2                 4
OSA-Express4S 10 GbE [6]               #0406, #0407    1                    1                 1
OSA-Express4S 1000BASE-T [6]           #0408           2                    2                 4
Adapters with multiple ports per CHPID

Consider the following when using an adapter with multiple ports per CHPID (all 1 GbE adapters):

Due to technical limitations, both adapter ports are always available to the operating system in the partition. The OpenStack DPM guest image tools take care of configuring the correct port. However, nothing technically prevents an administrator of the operating system from deconfiguring the current port and configuring the other port of the adapter.

Therefore the recommendation is to wire only a single port of such adapters, or to wire both ports into the same network.

Maximum number of NICs per adapter

There is a limit on how many NICs can be created for a single OSA CHPID. This also limits the number of Neutron ports that can correspond to a certain adapter.

  • Available devices per CHPID: 1920 [4]
  • Devices used per NIC: 3
  • = Maximum number of NICs: 1920 / 3 = 640 NICs

Note

This is an absolute number for an adapter. If multiple hosts are configured to use the same adapter, they also share the 640 NICs.

Note

Partitions not used by OpenStack might also consume devices of an adapter that is configured in OpenStack. The actual number of OpenStack Neutron ports available for this adapter decreases accordingly.

Note

This limit is rather theoretical, as each of the maximum of 85 partitions would need to consume more than 7 NICs on a single adapter, which is very unlikely.

Note

The number can be increased by splitting the CPC into multiple hosts (subsets), where each subset uses a different adapter that is wired into the same physical network.
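
As an illustration (a sketch; the adapter object-ids are placeholders), the agents of two such subsets would then map the same physical network name to different adapters that are wired into the same physical network:

# Neutron DPM agent of CPC subset 1
[dpm]
physical_network_adapter_mappings = physnet1:12345678-1234-1234-1234-123456789a

# Neutron DPM agent of CPC subset 2 (different adapter, same physical network)
[dpm]
physical_network_adapter_mappings = physnet1:12345678-1234-1234-1234-123456789b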

Hipersockets

An existing Hipersockets network can be used as a Neutron physical network.

A Hipersockets network is scoped to a CEC, but Neutron requires physical networks to be accessible on

  • all non-DPM compute nodes that offer access to that network
  • all network nodes (for DHCP, routing and metadata)
  • all DPM partitions attached to that network

Therefore the usage of Hipersockets is limited to a single CEC as of today. In case a network node (DHCP, routing, metadata) is required, it must also reside on that CEC.

Maximum number of NICs per Hipersockets network
  • Available devices: 12288 [4]
  • Devices used per network interface: 3
  • = Maximum number of NICs: 12288 / 3 = 4096 NICs

This number is not per CHPID, but across all 32 CHPIDs [5]!

Note

The 4096 NICs relate to NICs on all existing Hipersockets networks on this CEC. If another Hipersockets network is configured on this CEC, the number of available NICs decreases by the number of NICs already in use.

RoCE Adapters

RoCE adapters are currently not supported.

Contributing to the project

Contributing

If you would like to contribute to the development of the networking-dpm project, you must follow the rules for OpenStack contributions described in the “If you’re a developer, start here” section of this page:

Once those steps have been completed, changes to the networking-dpm project should be submitted for review via the Gerrit tool, following the workflow documented at:

Pull requests submitted through GitHub will be ignored.

The Git repository for the networking-dpm project is here:

Bugs against the networking-dpm project should be filed on Launchpad (not on GitHub):

Pending changes for the networking-dpm project can be seen on its Gerrit page:

Specs

Ocata

Initial networking support for OpenStack for DPM

https://bugs.launchpad.net/networking-dpm/+bug/1646095

Problem Description
Requirements
  • The OpenStack user should not care about the adapter type used for its networking.
  • Support Flat networking
  • DPM backed instances should be able to communicate with non DPM backed instances in the same cloud.
  • DPM backed instances should be able to communicate with external network participants not managed by OpenStack.
  • DPM backed instances should be able to use a Neutron router and floating ips.

Out of scope

  • Live migration is not considered
  • Network virtualization via Tunneling (e.g. VXLAN, GRE, STT) and VLAN
Adapters and Adapter Ports

Adapters from DPM API Perspective:

+-------+
|       |
|  HMC  |
|       |
+--+----+
   | 1
   |
   | *
+--+----+       +---------------+       +----------+
|       |1     *|               | 1   * |          |  is a
|  CEC  +-------+ DPM Partition +-------+ DPM NIC  +<------+
|       |       |               |       |          |       |
+---+---+       +---------------+       +-----+----+       |
    | 1                                     ^              |
    |                                  is a |              |
    |                                       |              |
    |                               +-------+----+  +------+------+
    |                               |     DPM    |  |    DPM      |
    |                               |  RoCE NIC  |  | OSA/HS NIC  |
    |                               |            |  |             |
    |                               +--------+---+  +-------------+
    |                                        | *           *|
    |                                        |              |
    |                                        |             1|
    |                                        |       +--------------------+
    |                                        |       |                    |
    |                                        |       | DPM Virtual Switch |
    |                                        |       |                    |
    |                                        |       +----+---------------+
    |                                        |            | 1
    |                                        |            |
    |  *                                     | 1          | 1
+---+---------------+                       ++------------+-----+
|                   | 1                 1+2 |                   |
|      DPM Adapter  +-----------------------+ DPM Network port  |
|                   |                       |                   |
+-------------------+                       +-------------------+

The following DPM objects represent system hardware:

  • CEC
  • DPM Adapter
  • DPM Network port
  • DPM Virtual Switch (OSA & Hipersockets)

Note

A special case is Hipersockets. It is not hardware but firmware, and therefore the corresponding adapter and vswitch objects can also be created via the HMC Web Services API.

The following DPM objects are dynamic resources that can be created via the HMC WS API:

  • Partition
  • NIC

The HMC itself is not a DPM object at all. It is just the management interface hosting the HMC WS API.

OSA Adapter

The DPM API allows attaching a partition to an OSA adapter port. However, this attachment is not honored at all. Even though a partition was attached to port 1, the operating system has access to both ports!

The configuration of the adapter port (0 or 1) is from within the Linux via a network devices portno attribute:

cat /sys/devices/qeth/0.0.1530/portno

By default, Linux configures port 0. In order to use port 1, the sysfs attribute must be explicitly changed from within Linux.

It is not possible to configure both ports in parallel using the same NIC. A separate NIC on the same adapter would need to be assigned to the partition.
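
A sketch of how port 1 could be selected manually from within the instance's Linux, using the example qeth device 0.0.1530 from above (the portno attribute can typically only be changed while the device is offline):

# take the qeth device offline (example device bus-ID)
echo 0 > /sys/devices/qeth/0.0.1530/online
# select port 1 instead of the default port 0
echo 1 > /sys/devices/qeth/0.0.1530/portno
# bring the device online again with the new port setting
echo 1 > /sys/devices/qeth/0.0.1530/online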

RoCE Adapter

The DPM API allows attaching a partition to a RoCE adapter port. However, this attachment is not honored at all. Even though a partition was attached to port 1, the operating system has access to both ports!

In contrast to OSA, both ports are assigned to the LPAR and both ports are configured by Linux. However, only a single IP address is assigned to both ports, as from the Neutron perspective this is a single port!

Hipersockets

Hipersockets is a CEC-internal network implemented in firmware.

Proposed Change
Supported Adapters
OSA (Open Systems Adapter)
Available OSA adapters on z13
Adapter                                Feature Codes   Available on   CHPIDs per adapter   Ports per CHPID   Total ports   Supported by DPM OpenStack
OSA-Express5S 10 GbE [2]               #0415, #0416    z13            1                    1                 1             yes
OSA-Express5S GbE [2]                  #0413, #0414    z13            1                    2                 2             yes (b)
OSA-Express5S 1000BASE-T Ethernet [2]  #0417           z13            1                    2                 2             yes (b)
OSA-Express4S GbE [6]                  #0404, #0405    z13 (a)        2                    2                 4             yes (b)
OSA-Express4S 10 GbE [6]               #0406, #0407    z13 (a)        1                    1                 1             yes
OSA-Express4S 1000BASE-T [6]           #0408           z13 (a)        2                    2                 4             yes (b)

(a) Available on carry-forward only

(b) Supported with restrictions described in this chapter

Note

All 10 Gbit/s adapters have only 1 port. The special cases are only the 1 Gbit/s adapters.

The multiport issues described in Adapters and Adapter Ports should be documented. For maximum security, the recommendation is to wire only port 0 of a multiport adapter, or to wire both ports into the same physical network.

For usage of port 1, some logic inside the guest image is required to determine which port should be configured. As of today, there is no way to figure out from within the operating system whether port 0 or 1 was chosen.

10 GbE RoCE (RDMA over Converged Ethernet) Express
Available RoCE adapters on z13
Adapter                      No. of ports per feature (FID)   Supported
10 GbE RoCE Express (CX3)    2                                no

Due to the multiport issues described in Adapters and Adapter Ports, the RoCE adapter is not supported at all.

Alternative: Document that both ports must be wired into the same physical network. If that is the case, a bond could be configured on top of those 2 interfaces.

Hipersockets
Hipersockets on z13
Adapter        No. of ports per feature (CHPID)   Supported
Hipersockets   n/a                                yes

Due to the facts described in Adapters and Adapter Ports, Hipersockets is only supported in single-CEC deployments. In order to use the network node (DHCP, routing, floating IP, metadata), it must also be deployed on the same CEC with an attachment to the Hipersockets network.

+------------------------------+  +--------------+  +--------------+
|                              |  |              |  |              |
|         Network Node         |  | Instance     |  | Instance     |
|                              |  |              |  |              |
|                              |  |              |  |              |
|  +---------------------+     |  |              |  |              |
|  |      Bridge         |     |  |              |  |              |
|  +------+-----------+--+     |  |              |  |              |
|         |           |        |  |              |  |              |
|  +------+------+    |        |  |              |  |              |
|  | Bond        |    |        |  |              |  |              |
|  +--+-------+--+    |        |  |              |  |              |
|     |       |       |        |  |              |  |              |
|  +--+--+ +--+--+  +-+--+     |  |    +----+    |  |    +----+    |
|  | OSA | | OSA |  | HS |     |  |    | HS |    |  |    | HS |    |
+--+--+--+-+--+--+--+-+--+-----+  +----+-+--+----+  +----+-+--+----+
      |       |       |                  |                 |
      |       |       |                  |                 |
      |       |       |                  |                 |
      |       |       +------------------+-----------------+
      |       |
      +       +
     external network

The OpenStack user is not aware of whether Hipersockets is being used or not.

Note

DPM offers a REST API to dynamically create a new Hipersockets adapter. Neutron will not make use of this DPM REST API but assumes that the Hipersockets network already exists.

Physical networks
Neutron Reference implementations

In the Neutron reference implementations (linuxbridge, ovs, macvtap), the mapping between physical networks and hypervisor interfaces is a 1:1 mapping.

+------------------+ 1      1 +---------------------------+
| physical network +----------+ hypervisor net-interface  |
+------------------+          +---------------------------+

There is no support for multiple hypervisor interfaces going into the same physical network. To achieve this, those interfaces need to be bonded in the hypervisor, so that Neutron again sees a single interface.

Mapping that to DPM

Mapping this to DPM, the mapping between physical networks and adapter-ports must be a 1:1 mapping.

+------------------+ 1      1 +---------------+
| physical network +----------+ adapter-port  |
+------------------+          +---------------+

Consequences:

A physical network can only be backed by a single adapter, and on that adapter only a single port can be used.

OSA adapter

1920 devices per CHPID means 1920 / 3 = 640 NICs. See [4] page 10.

-> A physical network can serve 640 NICs on a CEC.

Hipersockets

12288 devices means 12288/3 = 4096 NICs across all 32 Hipersockets networks. See [5] page 8.

-> A physical network can serve a total number of 4096 NICs.

Note

The 4096 NICs relate to NICs on all existing Hipersockets networks on this CEC. If another Hipersockets network is configured on this CEC, the number of available NICs decreases by the number of NICs already in use.

Note

As only the Hipersockets bridge solution is supported, the maximum number of NICs available for OpenStack DPM partitions is 4095, as the bridge partition also needs one attachment.

Logical networks

A logical network can be represented by

  • a physical network (= flat provider network)

Note

Explicitly out of scope are VLAN and tunneled networks like VXLAN or GRE.
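
For illustration, a flat provider network on one of the configured physical networks could then be created like this (a sketch; physnet1, the network and subnet names, and the subnet range are example values):

$ openstack network create --provider-network-type flat \
      --provider-physical-network physnet1 --share dpm_flat_net
$ openstack subnet create --network dpm_flat_net \
      --subnet-range 192.0.2.0/24 dpm_flat_subnet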

Neutron Mechanism Driver and L2 Agent

A mechanism driver and a Neutron L2 agent (one per CPCSubset) are implemented.

  • Agent

    • Reads config file on startup
    • Looks up virtual switch object-ids by the adapter object-ids provided in the configuration
    • Sends status reports to Neutron including the resolved configuration per CPCSubset
    • Checks for added/removed NICs
      • Does additional configuration for the NIC (None to be done in the first release)
      • Reports the configured port to the Neutron server
  • Mechanism driver

    • Stores all the status information from the agents
    • On a port binding request, it looks up the corresponding agent in the database and adds the relevant information to the response.

Note

As of today, the agent itself does not do any configuration of the NIC. Therefore no polling for new NICs needs to be done. Nova can continue instance start without waiting for the vif-plug event.

Going with an agent looks a bit like overkill, but this way we are prepared for the future. Also, we can make use of the existing ML2 framework with its AgentMechanismDriver base classes and eventually the ML2 common agent. It may also be easier to use polling right from the beginning, as it is integrated into those existing frameworks.

Another argument for going with this design is keeping the overall node architecture clean, e.g. all compute node related configuration is present on the compute node only.

Alternatives:

  • Go with a mechanism driver (server) only implementation
  • Have one agent per HMC
Neutron mechanism driver (server)
VNIC Type and VIF_TYPE

Use:

VNIC_TYPE='normal'

It should be totally transparent to the user whether Hipersockets or OSA is being used. Whether a Hipersockets or an OSA attachment is used should only depend on the admin (via the configuration).

Use:

vif_type = "dpm_vswitch"

The vif_type determines how Nova should attach the guest to the network.

Sequence diagram
  • The Neutron agent (q-agt) frequently sends its configuration to the Neutron server. The relevant pieces are

    • host = CPCSubset host identifier
    • mappings = physical network and its backing virtual switch object-id (OSA/HS)
  • On instance spawn, the Nova compute agent (n-cpu) asks Neutron to create a port with the following relevant details:

    • host = the CPCSubset host identifier on which the instance should be spawned
  • The Neutron server (q-svc) now looks up the corresponding agent configuration in its database. It adds the required details to the port's binding:vif_details dictionary. The following attributes are required:

    • virtual switch object-id (OSA, HS)
  • Nova compute creates the partition (this can also be done before the port details are requested).

  • Nova compute attaches the NIC to the partition and waits for the vif-plugged-event

  • Neutron agent detects that this new NIC is available.

    • Neutron agent performs configuration on the newly appeared NIC (optional).
    • Neutron agent reports existence of the device to the Neutron server.
  • The Neutron server sends the vif-plugged-event to Nova.

  • Nova compute starts the partition.

Neutron configuration

The following configuration is required:

  • Mapping from physical network to adapter port
  • HMC Access URL and credentials (depends on Design of configuration options)
Identification of an adapter-port

The configuration specifies a network adapter port using the following parameters:

  • adapter object-id
  • port element-id

This works for all adapters (RoCE, OSA, Hipersockets) in the same manner!

A script should be provided that helps the administrator figure out the object-id and the port element-id from the card location parameter or the PCHID.
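
A minimal sketch of what such a helper could look like, assuming the zhmcclient Python library is used to access the HMC Web Services API (the HMC address, credentials, CPC name, and the adapter property names used for filtering are assumptions for illustration):

#!/usr/bin/env python
# Sketch: list the network adapters of a CPC with their object-id and card
# location, to help fill in the physical network to adapter-port mapping.
import zhmcclient

HMC_HOST = '192.0.2.10'       # placeholder
HMC_USERID = 'hmcuser'        # placeholder
HMC_PASSWORD = 'hmcpassword'  # placeholder
CPC_NAME = 'CPC1'             # placeholder

session = zhmcclient.Session(HMC_HOST, HMC_USERID, HMC_PASSWORD)
client = zhmcclient.Client(session)
cpc = client.cpcs.find(name=CPC_NAME)
for adapter in cpc.adapters.list():
    adapter.pull_full_properties()
    props = adapter.properties
    # 'osd' (OSA) and 'hipersockets' adapter types are assumed here
    if props.get('type') not in ('osd', 'hipersockets'):
        continue
    print("%-30s object-id=%s card-location=%s"
          % (adapter.name, props.get('object-id'), props.get('card-location')))
session.logoff()

The port element-id (0 or 1 for the OSA adapters discussed above, defaulting to 0 in the mapping syntax) can then be chosen manually.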

Alternatives for identifying an adapter port:

  • The card location parameter and port element-id
  • PCHID/VCHID and port element-id
  • OSA/HS: Virtual-switch object-id
Neutron configuration options

There is one Neutron agent per HMC and cloud. Therefore the following configuration is required for the Neutron agent.

The Neutron server does not need configuration options.

HMC access information

hmc =
hmc_username =
hmc_password =

Note

What those options look like is not part of this specification. Neutron would use the same config parameters as Nova. All options that Nova implements need to be implemented by the Neutron agent as well. The shown options are just boilerplate options.

Physical adapter mappings

[dpm]
# List of mappings between physical network, and adapter-id/port combination
# <port element-id> defaults to 0
# physical_adapter_mappings = <physical_network>:<adapter object-id>[:<port element-id>],...
physical_adapter_mappings = physnet1:2841d931-6662-4c85-be2d-9b5b0b76d342:1,
                            physnet2:4a7abde3-964c-4f6a-918f-fbd124c4d7d3

A mapping between physical network and the combination of adapter object-id and port element-id.

References
