
Mirantis OpenStack v5.0 Planning Guide

©2014, Mirantis Inc.

Contents

• Preface
  • Intended Audience
  • Documentation History
• Introduction to Mirantis OpenStack and Fuel
• System Requirements
  • Master Node Hardware Recommendations
  • Node Server Hardware Recommendations
  • Supported Software
• Planning Summary
• Choose Network Topology
• Linux Distribution for Nodes
• Nodes and Roles
• Planning a Sahara Deployment
• Preparing for vSphere Integration
  • vSphere Installation
  • ESXi Host Networks Configuration
  • Limitations
• Calculate hardware requirements
• Example of Hardware Requirements Calculation
  • Calculating CPU
  • Calculating Memory
  • Calculating Storage
  • Throughput
  • Remote storage
  • Object storage
  • Calculating Network
  • Scalability and oversubscription
  • Hardware for this example
• Summary
• Reference configuration of hardware switches
  • Tagged ports
  • Untagged ports
• Example 1: HA + Nova-network FlatDHCP manager
  • Detailed Port Configuration
  • Nova-network Switch configuration (Cisco Catalyst 2960G)
  • Nova-network Switch configuration (Juniper EX4200)
• Example 2: HA + Neutron with GRE
  • Detailed port configuration
  • Neutron Switch configuration (Cisco Catalyst 2960G)
  • Neutron Switch configuration (Juniper EX4200)
• Index

Preface

This documentation provides information on how to use Mirantis Fuel to deploy an OpenStack environment. The information is for reference purposes and is subject to change.

Intended Audience

This documentation is intended for OpenStack administrators and assumes that you have experience with network and cloud concepts.

Documentation History

The following table lists the released revisions of this documentation:

  Revision | Date      | Description
           | May, 2014 | 5.0 GA

Introduction to Mirantis OpenStack and Fuel

OpenStack is an extensible, versatile, and flexible cloud management platform. It is a portfolio of cloud infrastructure services -- compute, storage, networking and other core resources -- that are exposed through REST APIs. It enables a wide range of control over these services, both from the perspective of an integrated Infrastructure as a Service (IaaS) controlled by applications and as a set of tools that enable automated manipulation of the infrastructure itself.

Mirantis OpenStack is a productized snapshot of the open source technologies. It includes Fuel, a graphical web tool that helps you quickly deploy your cloud environment. Fuel includes scripts that dramatically facilitate and speed up the process of cloud deployment, without requiring you to completely familiarize yourself with the intricate processes required to install the OpenStack environment components.

This guide provides details to get you started with Mirantis OpenStack and Fuel on a set of physical servers ("bare-metal installation"). See the User Guide for detailed instructions about how to download and install Fuel on the Fuel Master Node and then how to use the Fuel interface to deploy your OpenStack environment.

Further reading is available in the following documents:

• Terminology Reference is an alphabetical listing of technologies and concepts that serves as both a glossary and a master index of information in the Mirantis docs and the open source documentation.
• Operations Guide gives information about advanced tasks required to maintain the OpenStack environment after it is deployed. Most of these tasks are done in the shell using text editors and command line tools.
• Reference Architecture provides background information about how Mirantis OpenStack and its supporting HA architecture are implemented.
You can also run Fuel to deploy a Mirantis OpenStack environment on Oracle VirtualBox. VirtualBox deployment is useful for demonstrations and is a good way to begin your exploration of the tools and technologies. It is discussed in Running Fuel on VirtualBox. However, it is worth noting that deployments on top of VirtualBox do not generally meet the performance and robustness requirements of most production environments.

For community members or partners looking to take Fuel even further, see the developer documentation for information about the internal architecture of Fuel, instructions for building the project, information about interacting with the REST API, and other topics of interest to more advanced developers. You can also visit the Fuel project for more detailed information and become a contributor.

System Requirements

Before you begin installation of Fuel, make sure your hardware meets or exceeds the following minimum requirements.

Master Node Hardware Recommendations

To install the Fuel Master Node, you should base your hardware on the anticipated load of your server. Logically, deploying more node servers in your environment requires more CPU, RAM, and disk performance.

Suggested minimum configuration for installation in a production environment:

• Quad-core CPU
• 4GB RAM
• 1 gigabit network port
• 128GB SAS disk
• IPMI access through an independent management network

Suggested minimum configuration for installation in a lab environment:

• Dual-core CPU
• 2GB RAM
• 1 gigabit network port
• 50GB disk
• Physical console access

Node Server Hardware Recommendations

To help determine the correct sizing for OpenStack node servers, use the Mirantis Hardware Bill of Materials calculator. For more information on the logic used in the utility and basic directions, see: "How do you calculate how much hardware you need for your OpenStack cloud?".
Supported Software

• Operating Systems
  • CentOS 6.5 (x86_64 architecture only)
  • Ubuntu 12.04.4 (x86_64 architecture only)
• Puppet (IT automation tool) 3.4.2
• MCollective 2.3.3
• Cobbler (bare-metal provisioning tool) 2.2.3
• OpenStack Core Projects (Icehouse release 2014.1)
  • Nova (OpenStack Compute)
  • Swift (OpenStack Object Storage)
  • Glance (OpenStack Image Service)
  • Keystone (OpenStack Identity)
  • Horizon (OpenStack Dashboard)
  • Neutron (OpenStack Networking)
  • Cinder (OpenStack Block Storage service)
• OpenStack Core Integrated Projects (Icehouse release 2014.1)
  • Ceilometer (OpenStack Telemetry)
  • Heat (OpenStack Orchestration)
• OpenStack Incubated Projects (Icehouse release 2014.1)
  • Sahara (OpenStack Data Processing)
• OpenStack Related Projects
  • Murano v0.5
• Hypervisor
  • KVM
  • QEMU
  • vCenter
• Open vSwitch 1.10.2
• HA Proxy 1.4.24
• Galera 23.2.2
• RabbitMQ 3.2.3
• Pacemaker 1.1.10
• Corosync 1.4.6
• Keepalived 1.2.4
• MongoDB 2.4.6
• Ceph (v0.67.5 "Dumpling")
• MySQL 5.5.28 (CentOS), 5.5.37 (Ubuntu)

Planning Summary

Before installation, determine the deployment type that is appropriate for your configuration needs. You may want to print this list and make notes indicating your selections so you can be sure you have planned your deployment correctly. The following table provides a list of configuration steps that you must complete to plan the Mirantis OpenStack deployment.

  Step                                                        | Additional Information
  Select a network topology                                   | See Choose Network Topology
  Choose the Linux distribution to use on your nodes          | See Linux Distribution for Nodes
  Determine how many nodes to deploy, which roles to assign
  to each, and the level of high availability to implement    | See Nodes and Roles
  Calculate the server and network hardware needed            | See Calculate hardware requirements
  Prepare an IP address management plan and network
  association: identify the network addresses and VLAN IDs
  for your Public, Floating, Management, Storage, and
  virtual machine (Fixed) networks                            | Prepare a logical network diagram

Choose Network Topology

OpenStack supports two network modes, each of which supports two topologies. For architectural descriptions of the four topologies, see:

• Neutron with VLAN segmentation and OVS
• Neutron with GRE segmentation and OVS
• Nova-network FlatDHCP Manager
• Nova-network VLAN Manager

Nova-network is a simple legacy network manager. It can operate with predefined Private IP spaces only.

• If you do not want to split your VMs into isolated groups (tenants), you can choose the Nova-network with FlatDHCP topology. In this case, you will have one big tenant for all VMs.
• If you want to use multiple tenants and all of them contain approximately the same number of VMs, you can use the Nova-network with VLANManager topology. In this case, the number of tenants is predefined and all the tenants have an equal amount of Private IP space. You must decide on these two numbers (maximum number of tenants and Private IP space size) before starting deployment. Also, you must set up appropriate VLANs on your underlying network equipment.

Neutron is a modern and more complicated network manager. It can not only separate tenants, but it also decreases the requirements for the underlying network (physical switches and topology) and gives a great deal of flexibility for manipulating Private IP spaces. You can create Private IP spaces with different sizes and manipulate them on the fly.

• The Neutron with VLAN topology, like Nova-network with VLANManager, requires a predefined maximum number of tenants and underlying network equipment configuration.
• The Neutron with GRE topology does not restrict the maximum number of VLANs, and you can spawn a very large number of tenants. But GRE encapsulation decreases the speed of communication between the VMs and increases the CPU utilization of the Compute and Controller nodes. So, if you do not need really fast interconnections between VMs, do not want to predetermine the maximum number of tenants, and do not want to configure your network equipment, you can choose the Neutron with GRE topology.

Some other considerations when choosing a network topology:

• OVS (Open vSwitch) and bonding can only be implemented on Neutron.
• VMware vCenter can only be implemented on Nova-network.
• Murano is supported only on Neutron.

Linux Distribution for Nodes

Fuel allows you to deploy either the CentOS or Ubuntu Linux distribution as the host OS on the nodes. All nodes in the environment must run the same Linux distribution. Often, the choice is made based on personal preference; many administrative tasks on the nodes must be performed at shell level and many people choose the distribution with which they are most comfortable. Some specific considerations:

• Each distribution has some hardware support issues. See the release notes for details about known issues.
• In particular, the CentOS version used for OpenStack does not include native support for VLANs, while the Ubuntu version does. In order to use VLANs on CentOS-based nodes, you must configure VLAN splinters.
• CentOS supports .rpm packages; Ubuntu supports .deb packages.

Nodes and Roles

Your OpenStack environment contains a set of specialized nodes and roles; see OpenStack Environment Architecture for a description. When planning your OpenStack deployment, you must determine the proper mix of node types and what roles will be installed on each.
When you create your OpenStack environment, you will assign a role or roles to each node server. All production environments should be deployed for high availability, although you can deploy your environment without the replicated servers required for high availability and then add the replicated servers later. But part of your Nodes and Roles planning is to determine the level of HA you want to implement and to plan for adequate hardware.

Some general guiding principles:

• When deploying a production-grade OpenStack environment, it is best to spread the roles (and, hence, the workload) over as many servers as possible in order to have a fully redundant, highly-available OpenStack environment and to avoid performance bottlenecks.
• For demonstration and study purposes, you can deploy OpenStack on VirtualBox; see Running Fuel on VirtualBox for more information. This option has the lowest hardware requirements.
• OpenStack can be deployed on smaller hardware configurations by combining multiple roles on the nodes and mapping multiple Logical Networks to a single physical NIC.

This section provides information to help you decide how many nodes you need and which roles to assign to each. The absolute minimum requirement for a highly-available OpenStack deployment is to allocate 4 nodes:

• 3 Controller nodes, combined with Storage
• 1 Compute node

In production environments, it is highly recommended to separate storage nodes from controllers. This helps avoid resource contention, isolates failure domains, and allows you to optimize hardware configurations for specific workloads. To achieve that, you will need a minimum of 5 nodes when using Swift and Cinder storage backends, or 7 nodes for a fully redundant Ceph storage cluster:

• 3 Controller nodes
• 1 Cinder node or 3 Ceph OSD nodes
• 1 Compute node

Note: You do not need Cinder storage nodes if you are using Ceph RBD as the storage backend for Cinder volumes.
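The HA minimums above can be expressed as a small lookup. This is a minimal sketch for planning purposes only; the function name and dictionary format are illustrative and not part of any Fuel tooling.

```python
# Minimum node counts for a highly-available deployment, per storage choice.
# "controllers" = storage roles combined with controllers (absolute minimum).
def minimum_ha_nodes(storage_backend):
    if storage_backend == "controllers":
        return {"controller": 3, "compute": 1}
    if storage_backend == "cinder":   # Swift/Cinder backends on separate nodes
        return {"controller": 3, "cinder": 1, "compute": 1}
    if storage_backend == "ceph":     # fully redundant Ceph OSD cluster
        return {"controller": 3, "ceph-osd": 3, "compute": 1}
    raise ValueError("unknown backend: %s" % storage_backend)

assert sum(minimum_ha_nodes("controllers").values()) == 4
assert sum(minimum_ha_nodes("cinder").values()) == 5
assert sum(minimum_ha_nodes("ceph").values()) == 7
```

A planning spreadsheet or script can start from these minimums and scale the compute and storage counts up from there.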
Of course, you are free to choose how to deploy OpenStack based on the amount of available hardware and on your goals (such as whether you want a compute-oriented or storage-oriented environment). For a typical OpenStack compute deployment, you can use this table as high-level guidance to determine the number of controller, compute, and storage nodes you should have:

  # of Nodes | Controllers | Computes | Storages
  4-10       | 3           | 1-7      | 3 (on controllers)
  11-40      | 3           | 3-32     | 3+ (Swift) + 2 (proxy)
  41-100     | 4           | 29-88    | 6+ (Swift) + 2 (proxy)
  >100       | 5           | >84      | 9+ (Swift) + 2 (proxy)

Planning a Sahara Deployment

When deploying an OpenStack environment that includes Sahara for running Hadoop, you need to consider a few special conditions.

Floating IPs

Fuel configures Sahara to use floating IPs to manage the VMs. This means that you must provide a Floating IP pool in each Node Group Template you define. A special case is if you are using Nova-network and you have set the auto_assign_floating_ip parameter to true by checking the appropriate box on the Fuel UI. In this case, a floating IP is automatically assigned to each VM and the "floating ip pool" dropdown menu is hidden in the OpenStack Dashboard. In either case, Sahara assigns a floating IP to each VM it spawns, so be sure to allocate enough floating IPs.

Security Groups

Sahara does not configure OpenStack Security Groups, so you must manually configure the default security group in each tenant where Sahara will be used. See Ports Used by Sahara for a list of ports that need to be opened.

VM Flavor Requirements

Hadoop requires at least 1G of memory to run. That means you must use flavors that have at least 1G of memory for Hadoop cluster nodes.

Communication between virtual machines

Be sure that communication between virtual machines is not blocked.
Preparing for vSphere Integration

Fuel 5.0 and later can deploy a Mirantis OpenStack environment that boots and manages virtual machines in VMware vSphere. VMware provides a vCenter driver for OpenStack that enables the Nova-compute service to communicate with a VMware vCenter server that manages one or more ESX host clusters. The vCenter driver makes management convenient from both the OpenStack Dashboard (Horizon) and from vCenter, where advanced vSphere features can be accessed.

This section summarizes the planning you should do and other steps that are required before you attempt to deploy Mirantis OpenStack with vCenter integration. For more information:

• See VMware vSphere Integration for information about how vCenter support is implemented in Mirantis OpenStack.
• vSphere deployment notes gives instructions for creating and deploying a Mirantis OpenStack environment that is integrated with VMware vSphere.
• For background information about VMware vSphere support in OpenStack, see the VMware vSphere OpenStack Manuals.
• The official vSphere installation guide can be found here: vSphere Installation and Setup.

vSphere Installation

Before installing Fuel and using it to create a Mirantis OpenStack environment that is integrated with VMware vSphere, the vSphere installation must be up and running. Please check that you have completed the following steps:

• Install vSphere
• Install vCenter
• Install ESXi
• Configure vCenter
• Create DataCenter
• Create vCenter cluster
• Add ESXi host(s)

ESXi Host Networks Configuration

The ESXi host(s) network must be configured appropriately in order to enable integration of Mirantis OpenStack with vCenter. Follow the steps below:

1. Open the ESXi host page, select the "Manage" tab and click on "Networking". vSwitch0 and all its networks are shown.
Click the Add Network button.

2. In the "Add Networking" wizard, select the Virtual Machine Port group.

3. On the next page, select the "Virtual Machine Port Group" option to ensure that the network will be created in vSwitch0.

4. Always name the network br100; this is the only value that works with Fuel. Type a VLAN Tag in the VLAN ID field; the value must be equal to the VLAN Tag at VM Fixed on Fuel's Network Settings tab.

Limitations

• Only Nova-network with FlatDHCP mode is supported in the current version of the integration.
• OpenStack Block Storage service (Cinder) with the VMware VMDK datastore driver is not supported; you can only use Cinder with the LVM over iSCSI option as the Cinder backend.
• Each OpenStack environment can support one vCenter cluster.
• VMware vCenter can be deployed on Mirantis OpenStack with or without high availability (HA) configured. Note, however, that the vCenter Nova plugin runs on only one Controller node, even if that Controller node is replicated to provide HA. See LP1312653.

For background information about how vCenter support is integrated into Mirantis OpenStack, see VMware vSphere Integration. Follow the instructions in vSphere deployment notes to deploy your Mirantis OpenStack environment with vCenter support.

Calculate hardware requirements

You can use the Fuel Hardware Calculator to calculate the hardware required for your OpenStack environment. When choosing the hardware on which you will deploy your OpenStack environment, you should think about:

• CPU -- Consider the number of virtual machines that you plan to deploy in your cloud environment and the CPU per virtual machine.
Also consider how the environment will be used: environments used for heavy computational work may require more powerful CPUs than environments used primarily for storage, for example.

• Memory -- Depends on the amount of RAM assigned per virtual machine and to the controller node.
• Storage -- Depends on the local drive space per virtual machine, remote volumes that can be attached to a virtual machine, and object storage.
• Networking -- Depends on the chosen network topology (see Choose Network Topology), the network bandwidth per virtual machine, and network storage.

See Example of Hardware Requirements Calculation for some specific calculations you can make when choosing your hardware.

Example of Hardware Requirements Calculation

When you calculate resources for your OpenStack environment, consider the resources required for expanding your environment. The example described in this section presumes that your environment has the following prerequisites:

• 100 virtual machines
• 2 x Amazon EC2 compute units (2 GHz) average
• 16 x Amazon EC2 compute units (16 GHz) maximum

Calculating CPU

Use the following formula to calculate the number of CPU cores per virtual machine:

  max GHz / (number of GHz per core x 1.3 for hyper-threading)

Example: 16 GHz / (2.4 GHz x 1.3) = 5.12

Therefore, you must assign at least 5 CPU cores per virtual machine.

Use the following formula to calculate the total number of CPU cores:

  (number of VMs x number of GHz per VM) / number of GHz per core

Example: (100 VMs x 2 GHz per VM) / 2.4 GHz per core = 84

Therefore, the total number of CPU cores for 100 virtual machines is 84.

Depending on the selected CPU, you can calculate the required number of sockets. Use the following formula:

  total number of CPU cores / number of cores per socket

For example, if you use an 8-core Intel E5-2650 CPU:

  84 / 8 = 11

Therefore, you need 11 sockets.
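The CPU formulas above are easy to script. This is a sketch using the document's example numbers; the variable names are illustrative.

```python
import math

# CPU sizing per the formulas above (example: 100 VMs, 2 GHz avg, 16 GHz max).
vm_ghz_avg, vm_ghz_max = 2.0, 16.0   # per-VM average and maximum compute
core_ghz = 2.4                        # GHz per physical core
ht_factor = 1.3                       # hyper-threading counted as 1.3x, not 2x
cores_per_socket = 8
num_vms = 100

cores_per_vm = vm_ghz_max / (core_ghz * ht_factor)       # 5.12 -> at least 5 cores
total_cores = math.ceil(num_vms * vm_ghz_avg / core_ghz) # 83.3 -> 84 cores
sockets = math.ceil(total_cores / cores_per_socket)      # 10.5 -> 11 sockets

assert total_cores == 84 and sockets == 11
```

Swapping in your own VM counts and CPU specs gives the same rounded-up totals the worked example produces by hand.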
To calculate the number of servers required for your deployment, use the following formula:

  total number of sockets / number of sockets per server

Round the number of sockets up to an even number, giving 12 sockets:

  12 / 2 = 6

Therefore, you need 6 dual-socket servers.

You can calculate the number of virtual machines per server using the following formula:

  number of virtual machines / number of servers

Example: 100 / 6 = 16.6

Therefore, you can deploy 17 virtual machines per server. Using this calculation, you can add additional servers accounting for 17 virtual machines per server.

The calculation presumes the following conditions:

• No CPU oversubscription
• If you use hyper-threading, count each core as 1.3, not 2
• The CPU supports the technologies required for your deployment

Calculating Memory

Continuing to use the example from the previous section, we need to determine how much RAM will be required to support 17 VMs per server. Let's assume that you need an average of 4 GB of RAM per VM, with dynamic allocation of up to 12 GB for each VM. Calculating that all VMs will be using 12 GB of RAM requires that each server have 204 GB of available RAM.

You must also consider that the node itself needs sufficient RAM to accommodate core OS operations as well as RAM for each VM container (not the RAM allocated to each VM, but the memory the core OS uses to run the VM). The node's OS must run its own operations, schedule processes, allocate dynamic resources, and handle network operations, so giving the node itself at least 16 GB or more of RAM is not unreasonable.

Considering that the RAM we would consider for servers comes in 4 GB, 8 GB, 16 GB, and 32 GB sticks, we would need a total of 256 GB of RAM installed per server. For an average 2-CPU socket server board you get 16-24 RAM slots.
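The server-count and RAM arithmetic above can be sketched as follows; variable names are illustrative and the final round-up to installed RAM follows the guide's choice of 16 GB sticks.

```python
import math

# Server count: round sockets up to an even number, two sockets per server.
total_sockets = 11
even_sockets = total_sockets + (total_sockets % 2)   # 11 -> 12
servers = even_sockets // 2                          # 6 dual-socket servers
vms_per_server = math.ceil(100 / servers)            # 16.6 -> 17 VMs per server

# RAM: size for every VM at its dynamic ceiling, plus host-OS overhead.
peak_ram_per_vm_gb = 12                              # dynamic allocation ceiling
host_os_ram_gb = 16                                  # reserved for the host OS
needed_gb = vms_per_server * peak_ram_per_vm_gb + host_os_ram_gb  # 204 + 16 = 220

assert servers == 6 and vms_per_server == 17 and needed_gb == 220
# The guide rounds this up to 256 GB installed, i.e. sixteen 16 GB sticks.
```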
To have 256 GB installed you would need sixteen 16 GB sticks of RAM to satisfy your RAM needs for up to 17 VMs requiring dynamic allocation up to 12 GB and to support all core OS requirements. You can adjust this calculation based on your needs.

Calculating Storage

When it comes to disk space there are several types that you need to consider:

• Ephemeral (the local drive space for a VM)
• Persistent (the remote volumes that can be attached to a VM)
• Object storage (such as images or other objects)

As for the local drive space that must reside on the compute nodes, in our example of 100 VMs we make the following assumptions:

• 150 GB local storage per VM
• 15 TB total of local storage (100 VMs x 150 GB per VM)
• 500 GB of persistent volume storage per VM
• 50 TB total persistent storage

Returning to our already established example, we need to figure out how much storage to install per server. This storage will service the 17 VMs per server. If we are assuming 150 GB of storage for each VM's drive container, then we would need to install about 2.5 TB of storage on the server. Since most servers have anywhere from 4 to 32 2.5" drive slots or 2 to 12 3.5" drive slots, depending on server form factor (i.e., 2U vs. 4U), you will need to consider how the storage will be impacted by the intended use.

If storage impact is not expected to be significant, then you may consider using unified storage. For this example a single 3 TB drive would provide more than enough storage for seventeen 150 GB VMs. If speed is really not an issue, you might even consider installing two or three 3 TB drives and configure a RAID-1 or RAID-5 for redundancy. If speed is critical, however, you will likely want to have a single hardware drive for each VM. In this case you would likely look at a 3U form factor with 24 slots.
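The storage totals above follow directly from the per-VM assumptions; this sketch reproduces them with the example's numbers (illustrative variable names).

```python
# Ephemeral and persistent storage totals for the 100-VM example.
vms_total, vms_per_server = 100, 17
ephemeral_per_vm_gb = 150      # local drive space per VM
persistent_per_vm_gb = 500     # remote volume space per VM

per_server_tb = vms_per_server * ephemeral_per_vm_gb / 1000   # ~2.55 TB per compute node
ephemeral_total_tb = vms_total * ephemeral_per_vm_gb / 1000   # 15 TB
persistent_total_tb = vms_total * persistent_per_vm_gb / 1000 # 50 TB

assert round(ephemeral_total_tb) == 15
assert round(persistent_total_tb) == 50
assert 2.5 <= per_server_tb <= 2.6
```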
Don't forget that you will also need drive space for the node itself, and don't forget to order the correct backplane that supports the drive configuration that meets your needs. Using our example specifications and assuming that speed is critical, a single server would need 18 drives, most likely 2.5" 15,000 RPM 146 GB SAS drives.

Throughput

As far as throughput, that's going to depend on what kind of storage you choose. In general, you calculate IOPS based on the packing density (drive IOPS x drives in the server / VMs per server), but the actual drive IOPS will depend on the drive technology you choose. For example:

• 3.5" slow and cheap (100 IOPS per drive, with 2 mirrored drives)
  • 100 IOPS x 2 drives / 17 VMs per server = 12 Read IOPS, 6 Write IOPS
• 2.5" 15K (200 IOPS, four 600 GB drives, RAID-10)
  • 200 IOPS x 4 drives / 17 VMs per server = 48 Read IOPS, 24 Write IOPS
• SSD (40K IOPS, eight 300 GB drives, RAID-10)
  • 40K IOPS x 8 drives / 17 VMs per server = 19K Read IOPS, 9.5K Write IOPS

Clearly, SSD gives you the best performance, but the difference in cost between SSDs and the less costly platter-based solutions is going to be significant, to say the least. The acceptable cost burden is determined by the balance between your budget and your performance and redundancy needs.

It is also important to note that the rules for redundancy in a cloud environment are different than in a traditional server installation, in that entire servers provide redundancy as opposed to making a single server instance redundant. In other words, the weight for redundant components shifts from the individual OS installation to server redundancy. It is far more critical to have redundant power supplies and hot-swappable CPUs and RAM than to have redundant compute node storage. If, for example, you have 18 drives installed on a server with 17 drives directly allocated to the VMs and one fails, you simply replace the drive and push a new node copy. The remaining VMs carry whatever additional load is present due to the temporary loss of one node.

Remote storage

IOPS will also be a factor in determining how you plan to handle persistent storage. For example, consider these options for laying out your 50 TB of remote volume space:

• 12-drive storage frame using 3 TB 3.5" drives, mirrored
  • 36 TB raw, or 18 TB usable space per 2U frame
  • 3 frames (50 TB / 18 TB per frame)
  • 12 slots x 100 IOPS per drive = 1200 Read IOPS, 600 Write IOPS per frame
  • 3 frames x 1200 IOPS per frame / 100 VMs = 36 Read IOPS, 18 Write IOPS per VM
• 24-drive storage frame using 1 TB 7200 RPM 2.5" drives
  • 24 TB raw, or 12 TB usable space per 2U frame
  • 5 frames (50 TB / 12 TB per frame)
  • 24 slots x 100 IOPS per drive = 2400 Read IOPS, 1200 Write IOPS per frame
  • 5 frames x 2400 IOPS per frame / 100 VMs = 120 Read IOPS, 60 Write IOPS per VM

You can accomplish the same thing with a single 36-drive frame using 3 TB drives, but this becomes a single point of failure in your environment.

Object storage

When it comes to object storage, you will find that you need more space than you think. For example, this example specifies 50 TB of object storage. Object storage uses a default of 3 times the required space for replication, which means you will need 150 TB. However, to accommodate two hand-off zones, you will need 5 times the required space, which actually means 250 TB.

The calculations don't end there. You don't ever want to run out of space, so "full" should really be more like 75% of capacity, which means you will need a total of 333 TB, or a multiplication factor of 6.66. Of course, that might be a bit much to start with; you might want to start with a happy medium of a multiplier of 4, then acquire more hardware as your drives begin to fill up. That calculates to 200 TB in our example. So how do you put that together?
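The packing-density rule of thumb used in the throughput examples above can be captured in a few lines. This is a sketch: the function name is illustrative, and the write figure is simply halved to model mirrored/RAID-10 layouts as the examples do.

```python
import math

# IOPS per VM = drive IOPS * drives in the server / VMs per server;
# writes halved because each write hits two drives in a mirror/RAID-10.
def iops_per_vm(drive_iops, drives, vms_per_server=17):
    read = drive_iops * drives / vms_per_server
    return math.ceil(read), math.ceil(read / 2)

assert iops_per_vm(100, 2) == (12, 6)        # two cheap mirrored 3.5" drives
assert iops_per_vm(200, 4) == (48, 24)       # four 15K drives, RAID-10
read_ssd, write_ssd = iops_per_vm(40000, 8)  # eight SSDs, RAID-10: ~19K / ~9.5K
assert read_ssd > 18000
```

The same function applied to a storage frame (drives per frame, VMs served per frame) reproduces the remote-storage figures as well.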
If you were to use 3 TB 3.5" drives, you could use a 12-drive storage frame, with 6 servers hosting 36 TB each (for a total of 216 TB). You could also use a 36-drive storage frame, with just 2 servers hosting 108 TB each, but this is not recommended due to the high cost of failure in replication and capacity issues.

Calculating Network

Perhaps the most complex part of designing an OpenStack environment is the networking. An OpenStack environment can involve multiple networks even beyond the Public, Private, and Internal networks. Your environment may involve tenant networks, storage networks, multiple tenant private networks, and so on. Many of these will be VLANs, and all of them will need to be planned out in advance to avoid configuration issues.

In terms of the example network, consider these assumptions:

• 100 Mbit/s per VM
• HA architecture
• Network storage is not latency sensitive

In order to achieve this, you can use two 1 Gb links per server (2 x 1000 Mbit/s / 17 VMs = 118 Mbit/s). Using two links also helps with HA. You can also increase throughput and decrease latency by using two 10 Gb links, bringing the bandwidth per VM to 1 Gb/s, but if you're going to do that, you've got one more factor to consider.

Scalability and oversubscription

It is one of the ironies of networking that 1 Gb Ethernet generally scales better than 10 Gb Ethernet -- at least until 100 Gb switches are more commonly available. It's possible to aggregate the 1 Gb links in a 48-port switch, so that you have 48 x 1 Gb links down, but 4 x 10 Gb links up. Do the same thing with a 10 Gb switch, however, and you would need 4 x 100 Gb links up; falling short of that results in oversubscription. Like many other issues in OpenStack, you can avoid this problem to a great extent with careful planning.
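The object-storage multiplier and the per-VM bandwidth figures worked out above can be checked with a short sketch; the 75% fill target and hand-off-zone factor follow the text, and variable names are illustrative.

```python
# Object storage: 3x replication, +2 hand-off zones, and a 75% fill target.
usable_tb = 50
replicas = 3                 # default replica count
with_handoff = 5             # replicas plus two hand-off zones
fill_target = 0.75           # treat 75% of raw capacity as "full"

assert usable_tb * replicas == 150       # replication alone
assert usable_tb * with_handoff == 250   # with hand-off zones
total_tb = usable_tb * with_handoff / fill_target   # ~333 TB
multiplier = with_handoff / fill_target             # ~6.67x
assert round(total_tb) == 333

# Per-VM bandwidth with two bonded 1 Gb links and 17 VMs per server:
mbps_per_vm = 2 * 1000 / 17
assert round(mbps_per_vm) == 118
```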
Problems only arise when you are moving between racks, so plan to create "pods", each of which includes both storage and compute nodes. Generally, a pod is the size of a non-oversubscribed L2 domain.

Hardware for this example

In this example, you are looking at:

• 2 data switches (for HA), each with a minimum of 12 ports for data (2 x 1 Gb links per server x 6 servers)
• 1 x 1 Gb switch for IPMI (1 port per server x 6 servers)
• Optional Cluster Management switch, plus a second for HA

Because your network will in all likelihood grow, it's best to choose 48-port switches. Also, as your network grows, you will need to consider uplinks and aggregation switches.

Summary

In general, your best bet is to choose a 2-socket server with a balance in I/O, CPU, memory, and disk that meets your project requirements. Look for a 1U R-class or 2U high-density C-class server. Some good options from Dell for compute nodes include:

• Dell PowerEdge R620
• Dell PowerEdge C6220 Rack Server
• Dell PowerEdge R720XD (for high disk or IOPS requirements)

You may also want to consider systems from HP (http://www.hp.com/servers) or from a smaller systems builder like Aberdeen, a manufacturer that specializes in powerful, low-cost systems and storage servers (http://www.aberdeeninc.com).

Reference configuration of hardware switches

This section describes reference configurations for Cisco and Juniper network switches.
Tagged ports

Cisco Catalyst

interface [Ten]GigabitEthernet[interface number]
 description [port description]
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan [vlan IDs for specific networks]
 switchport mode trunk
 spanning-tree portfast trunk
 switchport trunk native vlan [vlan ID] - if necessary, one untagged VLAN

Cisco Nexus / Arista

interface ethernet[interface number]
 description [port description]
 switchport
 switchport mode trunk
 switchport trunk allowed vlan [vlan IDs for specific networks]
 switchport trunk native vlan [vlan ID] - if necessary, one untagged VLAN

Juniper

interfaces {
    [interface_name]-[interface_number] {
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan IDs or names of specific networks ];
                }
                native-vlan-id [vlan ID]; - if necessary, one untagged VLAN
            }
        }
    }
}

Untagged ports

Cisco Catalyst

interface [Ten]GigabitEthernet[interface number]
 description [port description]
 switchport access vlan [vlan ID for specific network]
 switchport mode access
 spanning-tree portfast

Cisco Nexus / Arista

interface ethernet[interface number]
 description [port description]
 switchport
 switchport access vlan [vlan ID for specific network]

Juniper

interfaces {
    [interface_name]-[interface_number] {
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members [vlan ID or name for specific network];
                }
            }
        }
    }
}
Example 1: HA + Nova-network FlatDHCP manager

As a model example, the following configuration is used:

• Deployment mode: Multi-node HA
• Networking model: Nova-network FlatDHCP manager

Hardware and environment:

• 7 servers with two 1 Gb/s Ethernet NICs and IPMI
• 1 Cisco Catalyst 2960G switch
• Independent out-of-band management network for IPMI
• Connection to the Internet and/or DC network via a router (called Gateway) with IP 172.16.1.1

Node server roles:

• 1 server as Fuel Node
• 3 servers as Controller Nodes
• 1 server as Cinder Node
• 2 servers as Compute Nodes

Network configuration plan:

• Public network 172.16.1.0/24
• Floating network 172.16.0.0/24 in VLAN 100
• Management network 192.168.0.0/24 in VLAN 101
• Storage network 192.168.1.0/24 in VLAN 102
• Private (Fixed) network 10.0.0.0/24 in VLAN 103
• Administrative network (for Fuel) 10.20.0.0/24 in VLAN 104

Network parameters:

• Fuel server IP: 10.20.0.2/24
• Default gateway: 10.20.0.1
• DNS: 10.20.0.1

Note: Access to the Internet and the rest of the DC is available through the Public network (for OpenStack nodes) and the Administrative network (for the Fuel server).

From the server node side, ports with the following VLAN IDs for networks are used:

• eth0 - Management VLAN 101 (tagged), Storage VLAN 102 (tagged) and Administrative VLAN 104 (untagged)
• eth1 - Public/Floating VLAN 100 (tagged), Private VLAN 103 (tagged)

Detailed Port Configuration

The following table describes the detailed port configuration and VLAN assignment.
Switch Port   Server name                Server NIC   tagged/untagged   VLAN ID
G0/1          Fuel                       eth0         untagged          104
G0/2          Fuel                       eth1         untagged          100
G0/3          Compute Node 1             eth0         tagged            101, 102, 104 (untagged)
G0/4          Compute Node 1             eth1         tagged            100, 103
G0/5          Compute Node n             eth0         tagged            101, 102, 104 (untagged)
G0/6          Compute Node n             eth1         tagged            100, 103
G0/7          Controller Node 1          eth0         tagged            101, 102, 104 (untagged)
G0/8          Controller Node 1          eth1         tagged            100, 103
G0/9          Controller Node 2          eth0         tagged            101, 102, 104 (untagged)
G0/10         Controller Node 2          eth1         tagged            100, 103
G0/11         Controller Node 3          eth0         tagged            101, 102, 104 (untagged)
G0/12         Controller Node 3          eth1         tagged            100, 103
G0/13         Cinder Node                eth0         tagged            101, 102, 104 (untagged)
G0/14         Cinder Node                eth1         tagged            100, 103
G0/24         Router (default gateway)   ---          untagged          100

Connect the servers to the switch as in the diagram below.

The following diagram describes the network topology for this environment.

Nova-network Switch configuration (Cisco Catalyst 2960G)

Use the following configuration to deploy Mirantis OpenStack using a Cisco Catalyst 2960G network switch:

service timestamps debug datetime msec localtime show-timezone
service timestamps log datetime msec localtime show-timezone
service password-encryption
service sequence-numbers
!
hostname OpenStack_sw1
!
logging count
logging buffered 64000 informational
logging rate-limit console 100 except errors
logging console informational
enable secret r00tme
!
username root privilege 15 secret r00tme
!
no aaa new-model
aaa session-id common
ip subnet-zero
ip domain-name domain.ltd
ip name-server [ip of domain name server]
!
spanning-tree mode rapid-pvst
spanning-tree loopguard default
spanning-tree etherchannel guard misconfig
spanning-tree extend system-id
!
ip ssh time-out 60
ip ssh authentication-retries 2
ip ssh version 2
!
vlan 100
 name Public
vlan 101
 name Management
vlan 102
 name Storage
vlan 103
 name Private
vlan 104
 name Admin
!
interface GigabitEthernet0/1
 description Fuel Node eth0
 switchport access vlan 104
 switchport mode access
 spanning-tree portfast
!
interface GigabitEthernet0/2
 description Fuel Node eth1 (optional to have direct access to Public net)
 switchport access vlan 100
 switchport mode access
 spanning-tree portfast
!
interface GigabitEthernet0/3
 description Compute Node 1 eth0
 switchport trunk native vlan 104
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 101, 102, 104
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet0/4
 description Compute Node 1 eth1
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 100, 103
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet0/5
 description Compute Node 2 eth0
 switchport trunk native vlan 104
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 101, 102, 104
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet0/6
 description Compute Node 2 eth1
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 100, 103
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet0/7
 description Controller Node 1 eth0
 switchport trunk native vlan 104
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 101, 102, 104
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet0/8
 description Controller Node 1 eth1
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 100, 103
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet0/9
 description Controller Node 2 eth0
 switchport trunk native vlan 104
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 101, 102, 104
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet0/10
 description Controller Node 2 eth1
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 100, 103
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet0/11
 description Controller Node 3 eth0
 switchport trunk native vlan 104
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 101, 102, 104
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet0/12
 description Controller Node 3 eth1
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 100, 103
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet0/13
 description Cinder Node eth0
 switchport trunk native vlan 104
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 101, 102, 104
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet0/14
 description Cinder Node eth1
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 100, 103
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet0/24
 description Connection to default gateway
 switchport access vlan 100
 switchport mode access
!
interface Vlan100
 ip address 172.16.1.254 255.255.255.0
 ip address 172.16.0.254 255.255.255.0 secondary
 no shutdown
!
ip route 0.0.0.0 0.0.0.0 172.16.1.1
!
ip classless
no ip http server
no ip http secure-server
!
line con 0
 session-timeout 15
 privilege level 15
 login local
 password r00tme
!
line vty 0 15
 session-timeout 15
 login local
 password r00tme
!
ntp server [ntp_server1] prefer
ntp server [ntp_server2]

Nova-network Switch configuration (Juniper EX4200)

Use the following configuration to deploy Mirantis OpenStack using a Juniper EX4200 network switch:
system {
    host-name OpenStack_sw1;
    domain-name domain.ltd;
    authentication-order [ password ];
    root-authentication {
        encrypted-password "xxxxxxxxxxxxxxxxxxx";
    }
    services {
        ssh;
    }
    ntp {
        server [ntp_server1] prefer;
        server [ntp_server2];
    }
}
interfaces {
    ge-0/0/0 {
        description "Fuel Node eth0";
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members vlan_104;
                }
            }
        }
    }
    ge-0/0/1 {
        description "Fuel Node eth1 (optional to have direct access to Public net)";
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members vlan_100;
                }
            }
        }
    }
    ge-0/0/2 {
        description "Compute Node 1 eth0";
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan_101 vlan_102 ];
                }
                native-vlan-id vlan_104;
            }
        }
    }
    ge-0/0/3 {
        description "Compute Node 1 eth1";
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan_100 vlan_103 ];
                }
            }
        }
    }
    ge-0/0/4 {
        description "Compute Node 2 eth0";
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan_101 vlan_102 ];
                }
                native-vlan-id vlan_104;
            }
        }
    }
    ge-0/0/5 {
        description "Compute Node 2 eth1";
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan_100 vlan_103 ];
                }
            }
        }
    }
    ge-0/0/6 {
        description "Controller Node 1 eth0";
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan_101 vlan_102 ];
                }
                native-vlan-id vlan_104;
            }
        }
    }
    ge-0/0/7 {
        description "Controller Node 1 eth1";
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan_100 vlan_103 ];
                }
            }
        }
    }
    ge-0/0/8 {
        description "Controller Node 2 eth0";
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan_101 vlan_102 ];
                }
                native-vlan-id vlan_104;
            }
        }
    }
    ge-0/0/9 {
        description "Controller Node 2 eth1";
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan_100 vlan_103 ];
                }
            }
        }
    }
    ge-0/0/10 {
        description "Controller Node 3 eth0";
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan_101 vlan_102 ];
                }
                native-vlan-id vlan_104;
            }
        }
    }
    ge-0/0/11 {
        description "Controller Node 3 eth1";
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan_100 vlan_103 ];
                }
            }
        }
    }
    ge-0/0/12 {
        description "Cinder Node 1 eth0";
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan_101 vlan_102 ];
                }
                native-vlan-id vlan_104;
            }
        }
    }
    ge-0/0/13 {
        description "Cinder Node 1 eth1";
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan_100 vlan_103 ];
                }
            }
        }
    }
    ge-0/0/23 {
        description "Connection to default gateway";
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members vlan_100;
                }
            }
        }
    }
    vlan {
        unit 100 {
            family inet {
                address 172.16.1.254/24;
                address 172.16.0.254/24;
            }
        }
    }
}
routing-options {
    static {
        route 0.0.0.0/0 next-hop 172.16.1.1;
    }
}
protocols {
    dcbx {
        interface all;
    }
    rstp {
        bridge-priority 32k;
        interface ge-0/0/0.0 {
            edge;
        }
        interface ge-0/0/1.0 {
            edge;
        }
        interface ge-0/0/23.0 {
            edge;
        }
        bpdu-block-on-edge;
    }
    lldp {
        interface all;
    }
}
vlans {
    vlan_1;
    vlan_100 {
        description Public;
        vlan-id 100;
        l3-interface vlan.100;
    }
    vlan_101 {
        description Management;
        vlan-id 101;
    }
    vlan_102 {
        description Storage;
        vlan-id 102;
    }
    vlan_103 {
        description Private;
        vlan-id 103;
    }
    vlan_104 {
        description Admin;
        vlan-id 104;
    }
}

Example 2: HA + Neutron with GRE

As a model example, the following configuration is used:

• Deployment mode: Multi-node HA
• Networking model: Neutron with GRE

Hardware and environment:

• 7 servers with two 1 Gb/s Ethernet NICs and IPMI
• 1 Cisco Catalyst 3750 switch
• Independent out-of-band management network for IPMI
• Connection to the Internet and/or DC network via a router (called Gateway) with IP 172.16.1.1

Node server roles:

• 1 server as Fuel Node
• 3 servers as Controller Nodes
• 1 server as Cinder Node
• 2 servers as Compute Nodes

Network configuration plan:

• Floating/Public network 172.16.0.0/24 in VLAN 100 (untagged on servers)
• Floating IP range 172.16.0.130 - 254
• Internal (private) network 192.168.111.0/24
• Gateway 192.168.111.1
• DNS 8.8.4.4, 8.8.8.8
• Tunnel ID range 2 - 65535
• Management network 192.168.0.0/24 in VLAN 101
• Storage network 192.168.1.0/24 in VLAN 102
• Administrative network (for Fuel) 10.20.0.0/24 in VLAN 103

Network parameters:

• Fuel server IP: 10.20.0.2/24
• Default gateway: 10.20.0.1
• DNS: 10.20.0.1

Note:
Access to the Internet and the rest of the DC is available through the Public network (for OpenStack nodes) and the Administrative network (for the Fuel server).

From the server side, ports with the following VLAN IDs are used:

• eth0 - Administrative VLAN 103 (untagged)
• eth1 - Public/Floating VLAN 100 (untagged), Management VLAN 101 (tagged), Storage VLAN 102 (tagged)

Detailed port configuration

The following table describes the port configuration for this deployment.

Switch Port   Server name                Server NIC   tagged/untagged   VLAN ID
G0/1          Fuel                       eth0         untagged          103
G0/2          Fuel                       eth1         untagged          100
G0/3          Compute Node 1             eth0         untagged          103
G0/4          Compute Node 1             eth1         tagged            100 (untagged), 101, 102
G0/5          Compute Node n             eth0         untagged          103
G0/6          Compute Node n             eth1         tagged            100 (untagged), 101, 102
G0/7          Controller Node 1          eth0         untagged          103
G0/8          Controller Node 1          eth1         tagged            100 (untagged), 101, 102
G0/9          Controller Node 2          eth0         untagged          103
G0/10         Controller Node 2          eth1         tagged            100 (untagged), 101, 102
G0/11         Controller Node 3          eth0         untagged          103
G0/12         Controller Node 3          eth1         tagged            100 (untagged), 101, 102
G0/13         Cinder Node                eth0         untagged          103
G0/14         Cinder Node                eth1         tagged            100 (untagged), 101, 102
G0/24         Router (default gateway)   ---          untagged          100

Neutron Switch configuration (Cisco Catalyst 2960G)

Use the following configuration to deploy Mirantis OpenStack using a Cisco Catalyst 2960G network switch:

service timestamps debug datetime msec localtime show-timezone
service timestamps log datetime msec localtime show-timezone
service password-encryption
service sequence-numbers
!
hostname OpenStack_sw1
!
logging count
logging buffered 64000 informational
logging rate-limit console 100 except errors
logging console informational
enable secret r00tme
!
username root privilege 15 secret r00tme
!
no aaa new-model
aaa session-id common
ip subnet-zero
ip domain-name domain.ltd
ip name-server [ip of domain name server]
!
spanning-tree mode rapid-pvst
spanning-tree loopguard default
spanning-tree etherchannel guard misconfig
spanning-tree extend system-id
!
ip ssh time-out 60
ip ssh authentication-retries 2
ip ssh version 2
!
vlan 100
 name Public
vlan 101
 name Management
vlan 102
 name Storage
vlan 103
 name Admin
!
interface GigabitEthernet0/1
 description Fuel Node eth0
 switchport access vlan 103
 switchport mode access
 spanning-tree portfast
!
interface GigabitEthernet0/2
 description Fuel Node eth1 (optional to have direct access to Public net)
 switchport access vlan 100
 switchport mode access
 spanning-tree portfast
!
interface GigabitEthernet0/3
 description Compute Node 1 eth0
 switchport access vlan 103
 switchport mode access
 spanning-tree portfast
!
interface GigabitEthernet0/4
 description Compute Node 1 eth1
 switchport trunk native vlan 100
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 100, 101, 102
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet0/5
 description Compute Node 2 eth0
 switchport access vlan 103
 switchport mode access
 spanning-tree portfast
!
interface GigabitEthernet0/6
 description Compute Node 2 eth1
 switchport trunk native vlan 100
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 100, 101, 102
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet0/7
 description Controller Node 1 eth0
 switchport access vlan 103
 switchport mode access
 spanning-tree portfast
!
interface GigabitEthernet0/8
 description Controller Node 1 eth1
 switchport trunk native vlan 100
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 100, 101, 102
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet0/9
 description Controller Node 2 eth0
 switchport access vlan 103
 switchport mode access
 spanning-tree portfast
!
interface GigabitEthernet0/10
 description Controller Node 2 eth1
 switchport trunk native vlan 100
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 100, 101, 102
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet0/11
 description Controller Node 3 eth0
 switchport access vlan 103
 switchport mode access
 spanning-tree portfast
!
interface GigabitEthernet0/12
 description Controller Node 3 eth1
 switchport trunk native vlan 100
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 100, 101, 102
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet0/13
 description Cinder Node eth0
 switchport access vlan 103
 switchport mode access
 spanning-tree portfast
!
interface GigabitEthernet0/14
 description Cinder Node eth1
 switchport trunk native vlan 100
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 100, 101, 102
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet0/24
 description Connection to default gateway
 switchport access vlan 100
 switchport mode access
!
interface Vlan100
 ip address 172.16.1.254 255.255.255.0
 ip address 172.16.0.254 255.255.255.0 secondary
 no shutdown
!
ip route 0.0.0.0 0.0.0.0 172.16.1.1
!
ip classless
no ip http server
no ip http secure-server
!
line con 0
 session-timeout 15
 privilege level 15
 login local
 password r00tme
!
line vty 0 15
 session-timeout 15
 login local
 password r00tme
!
ntp server [ntp_server1] prefer
ntp server [ntp_server2]

Neutron Switch configuration (Juniper EX4200)

Use the following configuration to deploy Mirantis OpenStack using a Juniper EX4200 network switch:

system {
    host-name OpenStack_sw1;
    domain-name domain.ltd;
    authentication-order [ password ];
    root-authentication {
        encrypted-password "xxxxxxxxxxxxxxxxxxx";
    }
    services {
        ssh;
    }
    ntp {
        server [ntp_server1] prefer;
        server [ntp_server2];
    }
}
interfaces {
    ge-0/0/0 {
        description "Fuel Node eth0";
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members vlan_103;
                }
            }
        }
    }
    ge-0/0/1 {
        description "Fuel Node eth1 (optional to have direct access to Public net)";
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members vlan_100;
                }
            }
        }
    }
    ge-0/0/2 {
        description "Compute Node 1 eth0";
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members vlan_103;
                }
            }
        }
    }
    ge-0/0/3 {
        description "Compute Node 1 eth1";
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan_101 vlan_102 ];
                }
                native-vlan-id vlan_100;
            }
        }
    }
    ge-0/0/4 {
        description "Compute Node 2 eth0";
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members vlan_103;
                }
            }
        }
    }
    ge-0/0/5 {
        description "Compute Node 2 eth1";
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan_101 vlan_102 ];
                }
                native-vlan-id vlan_100;
            }
        }
    }
    ge-0/0/6 {
        description "Controller Node 1 eth0";
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members vlan_103;
                }
            }
        }
    }
    ge-0/0/7 {
        description "Controller Node 1 eth1";
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan_101 vlan_102 ];
                }
                native-vlan-id vlan_100;
            }
        }
    }
    ge-0/0/8 {
        description "Controller Node 2 eth0";
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members vlan_103;
                }
            }
        }
    }
    ge-0/0/9 {
        description "Controller Node 2 eth1";
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan_101 vlan_102 ];
                }
                native-vlan-id vlan_100;
            }
        }
    }
    ge-0/0/10 {
        description "Controller Node 3 eth0";
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members vlan_103;
                }
            }
        }
    }
    ge-0/0/11 {
        description "Controller Node 3 eth1";
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan_101 vlan_102 ];
                }
                native-vlan-id vlan_100;
            }
        }
    }
    ge-0/0/12 {
        description "Cinder Node 1 eth0";
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members vlan_103;
                }
            }
        }
    }
    ge-0/0/13 {
        description "Cinder Node 1 eth1";
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members [ vlan_101 vlan_102 ];
                }
                native-vlan-id vlan_100;
            }
        }
    }
    ge-0/0/23 {
        description "Connection to default gateway";
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members vlan_100;
                }
            }
        }
    }
    vlan {
        unit 100 {
            family inet {
                address 172.16.1.254/24;
                address 172.16.0.254/24;
            }
        }
    }
}
routing-options {
    static {
        route 0.0.0.0/0 next-hop 172.16.1.1;
    }
}
protocols {
    dcbx {
        interface all;
    }
    rstp {
        bridge-priority 32k;
        interface ge-0/0/0.0 {
            edge;
        }
        interface ge-0/0/1.0 {
            edge;
        }
        interface ge-0/0/2.0 {
            edge;
        }
        interface ge-0/0/4.0 {
            edge;
        }
        interface ge-0/0/6.0 {
            edge;
        }
        interface ge-0/0/8.0 {
            edge;
        }
        interface ge-0/0/10.0 {
            edge;
        }
        interface ge-0/0/12.0 {
            edge;
        }
        interface ge-0/0/23.0 {
            edge;
        }
        bpdu-block-on-edge;
    }
    lldp {
        interface all;
    }
}
vlans {
    vlan_1;
    vlan_100 {
        description Public;
        vlan-id 100;
        l3-interface vlan.100;
    }
    vlan_101 {
        description Management;
        vlan-id 101;
    }
    vlan_102 {
        description Storage;
        vlan-id 102;
    }
    vlan_103 {
        description Admin;
        vlan-id 103;
    }
}

Index

Preparing for the Mirantis OpenStack Deployment
System Requirements