Introduction
Oracle Solaris 11.2 provides a complete OpenStack distribution. OpenStack, the popular open source cloud computing software, enjoys widespread industry involvement and provides comprehensive self-service environments for sharing and managing compute, network, and storage resources in the data center through a centralized web-based portal. OpenStack is integrated into all the core technology foundations of Oracle Solaris 11, so you can now set up an enterprise-ready private cloud infrastructure-as-a-service (IaaS) environment in minutes.
Using OpenStack with Oracle Solaris provides the following advantages:
- Scalable, mature, and industry-proven hypervisor. Oracle Solaris Zones offer significantly lower virtualization overhead, making them a perfect fit for OpenStack compute resources. Oracle Solaris Kernel Zones also provide independent kernel versions without compromise, allowing instances to run independent patch levels.
- Secure and compliant application provisioning. The new Unified Archive feature of Oracle Solaris 11.2 enables rapid application deployment in the cloud via a new archive format that enables portability between bare-metal systems and virtualized systems. Instant cloning in the cloud enables you to scale out and to reliably deal with disaster recovery emergencies. The Unified Archive feature, combined with capabilities such as Immutable Zones for read-only virtualization and the new Oracle Solaris compliance framework, enables administrators to ensure end-to-end integrity and can significantly reduce the ongoing cost of compliance.
- Fast, fail-proof cloud updates. Oracle Solaris makes updating OpenStack an easy and fail-proof process, updating a full cloud environment in less than twenty minutes. Through integration with the Oracle Solaris Image Packaging System, ZFS boot environments ensure quick rollback in case anything goes wrong, allowing administrators to quickly get back up and running.
- Application-driven software-defined networking. Taking advantage of Oracle Solaris network virtualization capabilities, applications can now drive their own behavior for prioritizing network traffic across the cloud. Combined with the new Elastic Virtual Switch in Oracle Solaris 11.2, administrators have complete flexibility and a single point of control for virtualized environments across their cloud environment.
- Single-vendor solution. Oracle is the #1 enterprise vendor offering a full-stack solution that provides the ability to get end-to-end support from a single vendor for database as a service (DaaS), platform as a service (PaaS) or—more simply—IaaS, saving significant heartache and cost.
Figure 1. The points of integration between Oracle Solaris and OpenStack
In Oracle Solaris 11.2, the OpenStack Havana 2013.2.3 release is available through the Oracle Solaris Image Packaging System's package repository. Using the available packages, you can deploy any of the following OpenStack services on the system, which are tightly integrated with the rest of Oracle Solaris:
- Nova—compute virtualization using Oracle Solaris Non-Global Zones as well as the new Oracle Solaris Kernel Zones.
- Neutron—network virtualization through the use of the new Elastic Virtual Switch capability in Oracle Solaris 11.2.
- Cinder—block storage virtualization using ZFS. Block storage volumes can be made available to local compute nodes or they can be made available remotely via iSCSI or Fibre Channel.
- Glance—image virtualization using the new Unified Archive feature of Oracle Solaris 11.2.
- Horizon—the standard OpenStack dashboard where the cloud infrastructure can be managed.
- Keystone—the standard OpenStack authentication service.
- Swift—redundant and scalable object storage virtualization using ZFS.
This document is not meant to be an exhaustive source of information on OpenStack but rather one focused on OpenStack with Oracle Solaris 11.2. Additional information can be found in the OpenStack documentation, which is available at openstack.org. Additional information about OpenStack on Oracle Solaris can be found on the OpenStack Java.net project page and via the project's mailing lists.
OpenStack on Oracle Solaris 11.2 does not have any special system requirements other than those spelled out for Oracle Solaris itself. Additional CPUs, memory, and disk space might be required, however, to support more than a trivial number of Nova instances. For information about general system requirements, see "Oracle Solaris 11.2 System Requirements."
Using the Unified Archive Method for Installation
The easiest way to start using OpenStack on Oracle Solaris is to download and install the Oracle Solaris 11.2 with OpenStack Unified Archive, which provides a convenient way of getting started with OpenStack in about ten minutes. All seven of the essential OpenStack services are preinstalled and preconfigured to make setting up OpenStack on a single system easy.
After installation and a small amount of customization, virtual machines (VMs), otherwise known as Nova instances, can be created, assigned block storage, attached to virtual networks, and then managed through an easy-to-use web browser interface.
The Unified Archive is preloaded with a pair of Glance images, one suitable for use with non-global zones and the other for kernel zones (`solaris-kz` branded zones). In addition, through the use of the new `archiveadm`(1M) command, new archives can be created from global, non-global, and kernel zones running Oracle Solaris 11.2 and then uploaded to the Glance repository for use with OpenStack.
In order to use the Unified Archive method of installation, a suitable target is necessary. This is typically a bare-metal system that can be installed via the Automated Installer, or it can be a kernel zone. Although the Unified Archive can, in theory, be installed inside a non-global zone, the Nova compute virtualization in Oracle Solaris does not support nested non-global zones. As such, using the manual package-based installation method is recommended for those deployments. Services that would be suitable to make available within non-global zones include Keystone, Glance, and Horizon.
Detailed instructions for both methods of installation are included in the `README` file associated with the archive. Refer to that for more detailed information, but briefly, the Unified Archive can be deployed using a variety of methods:

- Bare-metal installation using an Automated Installer network service
- Bare-metal installation using a USB image generated from the Unified Archive using `archiveadm`(1M)
- Indirect installation using the Oracle Solaris Automated Installer boot image combined with the Unified Archive
- Direct installation into a kernel zone using the standard `zonecfg`(1M) and `zoneadm`(1M) commands
The first two methods are most useful for doing direct system installations, while the third method can be used to install the archive into an Oracle VM VirtualBox instance. Finally, the last method allows you to create a kernel zone installed with the archive using just two commands.
To install the Unified Archive within a kernel zone using the last method, simply create a new kernel zone and then supply the path to the downloaded archive as part of the zone installation command, for example:
```
global# zonecfg -z openstack create -t SYSsolaris-kz
global# zoneadm -z openstack install -a /path/to/downloaded/archive.uar
```
At this point, the archive will be installed inside a new kernel zone named `openstack`. To get started, the new zone should be booted and configured through the zone's console:

```
global# zoneadm -z openstack boot
global# zlogin -C openstack
```
If nothing appears on the console immediately, press either Enter or Control-L to redraw the screen.
Once the new system has been installed, booted, and then configured, the Elastic Virtual Switch should be configured. This primarily consists of creating a set of public SSH keys for the `root`, `evsuser`, and `neutron` UNIX users and then appending those public keys to the `authorized_keys` file for the `evsuser` UNIX user: `/var/user/evsuser/.ssh/authorized_keys`.
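The key-creation pattern described above can be sketched in isolation. The following uses a scratch directory purely for illustration; a real deployment uses the `evsuser` and `neutron` home directories named in the text.

```shell
# Sketch of the EVS key setup pattern: generate a passphrase-less RSA
# keypair and append its public half to an authorized_keys file.
# A scratch directory stands in for /var/user/evsuser/.ssh here.
DIR=$(mktemp -d)
ssh-keygen -q -N '' -t rsa -f "$DIR/id_rsa"
cat "$DIR/id_rsa.pub" >> "$DIR/authorized_keys"
# The authorized_keys file now contains one public-key line.
cat "$DIR/authorized_keys"
```

In the real configuration this step is repeated for each of the three users, with all three public keys appended to `evsuser`'s `authorized_keys` file.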
The Elastic Virtual Switch requires some additional configuration, such as which sort of virtual LAN technology to use (VLAN or VXLAN) and the corresponding IDs or segments. To ease the automation of creating these keys and performing the configuration when using the Unified Archive, a script is supplied under `/usr/demo/openstack/configure_evs.py` that can be used to finalize the rest of the OpenStack and Elastic Virtual Switch configuration.
Note that when using the Unified Archive installation method, the default Horizon instance is not enabled with Transport Layer Security (TLS). To enable TLS in this configuration, uncomment the following lines in `/etc/openstack_dashboard/local_settings.py`:

```
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
from horizon.utils import secret_key
SECRET_KEY = secret_key.generate_or_read_from_file(os.path.join(LOCAL_PATH, '.secret_key_store'))
```
In addition, X.509 certificates need to be installed and the Apache configuration for Horizon needs to be adjusted to account for that. If self-signed certificates are going to be used, you might also want to uncomment the `OPENSTACK_SSL_NO_VERIFY` parameter in `/etc/openstack_dashboard/local_settings.py`. The Horizon configuration step in the next section has additional details about the certificate configuration.

Using the Manual Package Installation Method
An alternate method of installation, which is useful for doing multi-system configurations, is to install the OpenStack packages yourself. This installation method also takes roughly ten minutes to complete, although the time for configuration will vary depending on the services deployed.
One advantage of this method is that it allows an administrator to install only the OpenStack services necessary on each specific node. Installation can be done manually using the `pkg`(1) command or by specifying the desired packages through the use of an Oracle Solaris Automated Installer manifest using a network-based installation, as outlined in "Installing Using an Install Server."
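For the Automated Installer route, the desired packages are listed in the software section of the AI manifest. A minimal fragment might look like the following; the element names are assumed to follow the standard AI manifest layout for Oracle Solaris 11, and the surrounding manifest (target, source, and so on) is omitted.

```xml
<!-- Fragment of an AI manifest: install the OpenStack group package.
     Element names assumed from the standard AI manifest format. -->
<software type="IPS">
  <software_data action="install">
    <name>pkg:/cloud/openstack</name>
  </software_data>
</software>
```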
Table 1 shows the packages that are included at this time.
Table 1. Included Packages

| Package Name | Package Description |
|---|---|
| pkg:/cloud/openstack/cinder | OpenStack Cinder provides an infrastructure for managing block storage volumes in OpenStack. It allows block devices to be exposed and connected to compute instances for expanded storage, better performance, and integration with enterprise storage platforms. |
| pkg:/cloud/openstack/glance | OpenStack Glance provides services for discovering, registering, and retrieving virtual machine images. Glance has a RESTful API that allows querying of VM image metadata as well as retrieval of the actual image. VM images made available through Glance can be stored in a variety of locations, from simple file systems to object-storage systems such as OpenStack Swift. |
| pkg:/cloud/openstack/horizon | OpenStack Horizon is the canonical implementation of OpenStack's dashboard, which provides a web-based user interface to OpenStack services including Nova, Swift, Keystone, and so on. |
| pkg:/cloud/openstack/keystone | OpenStack Keystone is the OpenStack identity service used for authentication between the OpenStack services. |
| pkg:/cloud/openstack/neutron | OpenStack Neutron provides an API to dynamically request and configure virtual networks. These networks connect "interfaces" from other OpenStack services (for example, VNICs from Nova VMs). The Neutron API supports extensions to provide advanced network capabilities, for example, quality of service (QoS), access control lists (ACLs), network monitoring, and so on. |
| pkg:/cloud/openstack/nova | OpenStack Nova provides a cloud computing fabric controller that supports a wide variety of virtualization technologies. In addition to its native API, it includes compatibility with the commonly encountered Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3) APIs. |
| pkg:/cloud/openstack/swift | OpenStack Swift provides object storage services for projects and users in the cloud. |
In addition, as a convenience, the group package, `pkg:/cloud/openstack`, can be installed in a similar manner to automatically install all seven components.
The Oracle Solaris Image Packaging System packages listed in Table 1 can be installed individually or as a group on one or more systems. Once installed, some configuration steps are required to get started. See "Appendix A: Common Configuration Parameters for OpenStack" for more information about the most common parameters that need to be set for each OpenStack service. In particular, the Keystone service parameters `%SERVICE_TENANT_NAME%`, `%SERVICE_USER%`, and `%SERVICE_PASSWORD%` must be set.
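As an illustration, after substitution the Keystone authentication section of `/etc/cinder/api-paste.ini` might look like the following, using the values suggested in Table 2. The section name is the conventional one for the auth_token middleware, and the user and password shown are placeholders to be replaced with site-specific credentials.

```ini
; Illustrative fragment of /etc/cinder/api-paste.ini after the
; %SERVICE_*% placeholders have been replaced (values from Table 2).
[filter:authtoken]
auth_uri = http://127.0.0.1:5000/v2.0
identity_uri = http://127.0.0.1:35357
admin_tenant_name = service
admin_user = cinder
admin_password = cinder
```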
In general, for a manual installation, the following order of steps is recommended after installing the relevant OpenStack packages:
- Install and enable the RabbitMQ service. RabbitMQ provides support for the Advanced Message Queuing Protocol (AMQP), which is used for communication between all OpenStack services. Generally, a single node in the cloud is configured to run RabbitMQ.

  ```
  global# pkg install rabbitmq
  global# svcadm enable rabbitmq
  ```
- Customize the Keystone configuration, if desired. Edit `/etc/keystone/keystone.conf` and then enable the service.

  ```
  global# su - keystone -c "keystone-manage pki_setup"
  global# svcadm enable keystone
  ```

- Populate the Keystone database. This can be done manually or by using the supplied convenience script.

  ```
  global# su - keystone -c "/usr/demo/openstack/keystone/sample_data.sh"
  ```
- Customize the Cinder configuration, if desired. Edit `/etc/cinder/api-paste.ini` and `/etc/cinder/cinder.conf` and then enable the services.
  - If you wish to use iSCSI for connectivity between your Nova instances and the back-end storage, comment out all of the `volume_driver` options in `/etc/cinder/cinder.conf` except for the one specifying `cinder.volume.drivers.solaris.zfs.ZFSISCSIDriver`.
  - If you plan to export Fibre Channel LUNs using Cinder, the `volume_driver` option to leave uncommented is the one for `cinder.volume.drivers.solaris.zfs.ZFSFCDriver`.
  - Finally, the `cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver` entry should be the only one uncommented if you wish to have Cinder create volumes from an Oracle ZFS Storage Appliance. In this case, there are additional parameters in `/etc/cinder/cinder.conf` that will need to be adjusted to match the appliance configuration. For further details, see the README for the Cinder driver on the Oracle ZFS Storage Appliance.

  ```
  global# svcadm enable cinder-db
  global# svcadm enable cinder-api cinder-backup cinder-scheduler
  global# svcadm enable -r cinder-volume:default
  ```
- Customize the Glance configuration, if desired. Edit `/etc/glance/glance-api.conf`, `/etc/glance/glance-cache.conf`, `/etc/glance/glance-registry.conf`, and `/etc/glance/glance-scrubber.conf`, and then enable the services.

  ```
  global# svcadm enable glance-db
  global# svcadm enable glance-api glance-registry glance-scrubber
  ```

  If you used the OpenStack Unified Archive to get an OpenStack instance up and running, you will have noticed that it was preloaded with a pair of Glance images, one suitable for use with non-global zones and the other for kernel zones. If you set up OpenStack manually, you will need to add images to Glance that can be used with your first Nova instance. Unified Archives are the image format used for Oracle Solaris OpenStack. You can use the images Oracle has made available or create your own.

  The following shows how to capture a Unified Archive of a newly created non-global zone called `myzone` and then upload it to the Glance repository. In the example, the system in question is assumed to be a SPARC system. For an x86 system, the same commands would be used except the `architecture` property would be set to `x86_64` instead of `sparc64`.

  ```
  global# zonecfg -z myzone create
  global# zoneadm -z myzone install
  global# zlogin myzone 'sed /^PermitRootLogin/s/no$/without-password/ \
      < /etc/ssh/sshd_config > /system/volatile/sed.$$ ; \
      cp /system/volatile/sed.$$ /etc/ssh/sshd_config'
  global# archiveadm create -z myzone /var/tmp/myzone.uar
  global# glance \
      --os-auth-url http://localhost:5000/v2.0 \
      --os-username glance \
      --os-password glance \
      --os-tenant-name service \
      image-create \
      --container-format bare \
      --disk-format raw \
      --is-public true \
      --name "Oracle Solaris 11.2 SPARC (non-global zone)" \
      --property architecture=sparc64 \
      --property hypervisor_type=solariszones \
      --property vm_mode=solariszones < /var/tmp/myzone.uar
  ```
- Create SSH public keys. Create keys for the `evsuser`, `neutron`, and `root` users and append them to the `authorized_keys` file for `evsuser`.

  ```
  global# su - evsuser -c "ssh-keygen -N '' -f /var/user/evsuser/.ssh/id_rsa -t rsa"
  global# su - neutron -c "ssh-keygen -N '' -f /var/lib/neutron/.ssh/id_rsa -t rsa"
  global# su - root -c "ssh-keygen -N '' -f /root/.ssh/id_rsa -t rsa"
  global# cat /var/user/evsuser/.ssh/id_rsa.pub /var/lib/neutron/.ssh/id_rsa.pub \
      /root/.ssh/id_rsa.pub >> /var/user/evsuser/.ssh/authorized_keys
  ```
- Verify SSH. For the same three accounts, verify that SSH connectivity is working correctly by using `ssh`(1) to connect as `evsuser@localhost`. For these initial three SSH connections, answer yes to the question about wanting to continue to connect.

  ```
  global# su - evsuser -c "ssh evsuser@localhost whoami"
  global# su - neutron -c "ssh evsuser@localhost whoami"
  global# su - root -c "ssh evsuser@localhost whoami"
  ```
- Customize the Neutron configuration, if desired. Edit `/etc/neutron/neutron.conf`, `/etc/neutron/plugins/evs/evs_plugin.ini`, and `/etc/neutron/dhcp_agent.ini`, setting the address of the Elastic Virtual Switch controller, and then enable the services. For example, if the Elastic Virtual Switch controller is on the same system where the Neutron and Nova services will be run, the following commands may be used.

  ```
  global# pkg install rad-evs-controller
  global# svcadm restart rad:local
  global# evsadm set-prop -p controller=ssh://evsuser@localhost
  global# evsadm
  global# svcadm enable neutron-dhcp-agent neutron-server
  ```
- Customize the Nova configuration, if desired. Edit `/etc/nova/api-paste.ini` and `/etc/nova/nova.conf` and then enable the services.

  ```
  global# svcadm restart rad:local
  global# svcadm enable nova-conductor
  global# svcadm enable nova-api-ec2 nova-api-osapi-compute \
      nova-cert nova-compute nova-objectstore nova-scheduler
  ```
- Customize Horizon. First, customize the Horizon configuration by copying either `openstack-dashboard-http.conf` or `openstack-dashboard-tls.conf` from `/etc/apache2/2.2/samples-conf.d` into the Apache `/etc/apache2/2.2/conf.d` directory. If TLS is going to be enabled, then the appropriate certificates need to be generated and installed, and then the `/etc/apache2/2.2/conf.d/openstack-dashboard-tls.conf` file needs to be edited to reflect the location of the installed certificates. For more information on creating self-signed certificates, see the "SSL/TLS Strong Encryption: FAQ." Finally, the default Apache instance should be enabled or restarted.

  ```
  global# cp /etc/apache2/2.2/samples-conf.d/openstack-dashboard-http.conf \
      /etc/apache2/2.2/conf.d
  global# svcadm enable apache22
  ```

  or

  ```
  global# DASHBOARD=/etc/openstack_dashboard
  global# openssl req -new -x509 -nodes -out horizon.crt -keyout horizon.key
  global# mv horizon.crt horizon.key ${DASHBOARD}
  global# chmod 0600 ${DASHBOARD}/horizon.*
  global# sed \
      -e "/SSLCertificateFile/s:/path.*:${DASHBOARD}/horizon.crt:" \
      -e "/SSLCACertificateFile/d" \
      -e "/SSLCertificateKeyFile/s:/path.*:${DASHBOARD}/horizon.key:" \
      < /etc/apache2/2.2/samples-conf.d/openstack-dashboard-http.conf \
      > /etc/apache2/2.2/conf.d/openstack-dashboard-http.conf
  global# svcadm enable apache22
  ```
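The `sed` invocation used inside the `zlogin` step of the Glance image-capture example above is compact: it rewrites `PermitRootLogin no` to `PermitRootLogin without-password` in `sshd_config` so that key-based root logins into the new instance are possible. The transformation can be seen in isolation on a scratch file:

```shell
# Demonstrate the PermitRootLogin rewrite on a scratch file rather than
# a live /etc/ssh/sshd_config. The sed expression matches the one used
# in the image-capture example above.
F=$(mktemp)
printf 'Port 22\nPermitRootLogin no\n' > "$F"
sed '/^PermitRootLogin/s/no$/without-password/' "$F"
# Output:
#   Port 22
#   PermitRootLogin without-password
```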
Booting Your First Nova Instance
After the OpenStack installation is complete and you have enabled the desired OpenStack services (this is mostly taken care of already if you used the Unified Archive), you can log in to the OpenStack dashboard (Horizon) to examine the system and get started with provisioning a trial virtual machine.
To log in to Horizon, point the browser to `http://<mysystem>/horizon`, where `mysystem` is the name of the system that is running the Horizon service under the Apache web service. If you used the Unified Archive installation method or if you used the supplied `/usr/demo/openstack/keystone/sample_data.sh` shell script for the manual installation method, the default cloud administrator login is `admin` with a password of `secrete`.
Figure 2. The OpenStack Horizon login screen
When you log in as the cloud administrator, there are two panels on the left side of the screen. The rightmost panel (Admin) is the default and is the administrator view. It allows you to see an overall view of the Nova instances and Cinder volumes in use within the cloud. It also allows you to view and edit the Flavor definitions that define virtual machine characteristics, such as the number of virtual CPUs, the amount of memory, and the disk space assigned to a VM. On Oracle Solaris, this is also where the brand of the underlying Oracle Solaris Zone is defined, such as `solaris` for non-global zones and `solaris-kz` for kernel zones. Finally, from a system provisioning perspective, this panel also allows you to create virtual networks and routers for use by cloud users.
Figure 3. The OpenStack Horizon dashboard showing the administration panel
The other primary elements that the cloud administrator can view and edit concern projects (also known as tenants) and users. Projects provide a mechanism to group and isolate ownership of virtual computing resources and users, while users are the persons or services that use those resources in the cloud.
The leftmost panel of the OpenStack dashboard (Project) shows the project the user is using. For the `admin` user, this would be the demo project. Clicking the panel provides a set of options a cloud user can perform as a user under this project. If the Unified Archive method of installation was used, clicking Images & Snapshots will reveal that the Glance service has been prepopulated with two images: one for non-global zone-based instances and the other for kernel zone-based instances. And under Access & Security, users can upload their own personal SSH public key to the Nova service. This public key is automatically placed in the `root` user's `authorized_keys` file in the new instance, which allows a user to log in to the instance remotely.
To create a new instance, a cloud user (the admin or any multitenant user) simply needs to click Instances under Manage Compute. Clicking Launch Instance on the right side produces a dialog box where the cloud user can specify the type of image (by default, non-global zone or kernel zone are the choices), the name of the new instance and, finally, the flavor of the instance. The latter should match the zone type specified in the image type and the size chosen should reflect the requirements of the intended workload.
Under the Access & Security tab in the dialog box, you can choose which uploaded SSH keypair to install in the new instance to be created; and under the Network tab, you can choose which network(s) the instance should be attached to. Finally, clicking Launch causes the instance to be created, installed, and then booted. The time required for a new instance to be made available depends on a number of factors, including the size of the images, the resources provided in the flavor definition chosen, and where OpenStack has placed the root file system of the new instance.
In the Instances screen, you can click the name of the instance to see general information as well as view the instance's console log. By reloading this particular page, you can see updates that have taken place.
Note that by clicking the Volumes label on the left side of the screen, you can see the Cinder volumes that have been created. Generally, each instance will have at least one volume assigned to it and displayed here. In a multinode configuration, this volume might be remote from the instance using a protocol such as iSCSI or Fibre Channel. Instances that are made of non-global zones have a volume assigned only if the volume is on a different node in the cloud.
By clicking the Network Topology label on the left side, you can see a visual representation of the cloud network including all subnet segments, virtual routers, and active instances.
By clicking the Images & Snapshots label, you should see the Unified Archives that have been uploaded into Glance.
Figure 4. The Oracle Solaris 11.2 non-global zone available in Images & Snapshots through Glance
Appendix A: Common Configuration Parameters for OpenStack
Each OpenStack service has many configuration options available through its configuration file. Some of these options are for features not supported on Oracle Solaris or for vendor-specific drivers. The OpenStack community documentation referred to earlier is the definitive source for the non-Oracle Solaris configuration parameters, but some of the most common parameters to adjust in either a single-node or multi-node configuration are shown in Table 2.
Table 2. Common Parameters

| Configuration File | Option | Default Value | Common Alternate Values |
|---|---|---|---|
| /etc/cinder/api-paste.ini | auth_uri | http://127.0.0.1:5000/v2.0 | URI for public Keystone service |
| | identity_uri | http://127.0.0.1:35357 | URI for administrative Keystone service |
| | admin_tenant_name | %SERVICE_TENANT_NAME% (must be set) | service |
| | admin_user | %SERVICE_USER% (must be set) | cinder |
| | admin_password | %SERVICE_PASSWORD% (must be set) | cinder |
| /etc/cinder/cinder.conf | sql_connection | sqlite:///$state_path/$sqlite_db | URI for remote MySQL database |
| | glance_host | $my_ip | Host name or IP address of Glance service |
| | auth_strategy | keystone | keystone |
| | rabbit_host | localhost | Host name or IP address of RabbitMQ service |
| | volume_driver | cinder.volume.drivers.solaris.zfs.ZFSVolumeDriver | cinder.volume.drivers.solaris.zfs.ZFSISCSIDriver |
| | zfs_volume_base | rpool/cinder | Alternate ZFS pool/data set |
| /etc/glance/glance-api.conf | sql_connection | sqlite:////var/lib/glance/glance.sqlite | URI for remote MySQL database |
| | rabbit_host | localhost | Host name or IP address of RabbitMQ service |
| | auth_uri | http://127.0.0.1:5000/v2.0 | URI for public Keystone service |
| | identity_uri | http://127.0.0.1:35357 | URI for administrative Keystone service |
| | admin_tenant_name | %SERVICE_TENANT_NAME% (must be set) | service |
| | admin_user | %SERVICE_USER% (must be set) | glance |
| | admin_password | %SERVICE_PASSWORD% (must be set) | glance |
| /etc/glance/glance-cache.conf | auth_url | http://127.0.0.1:5000/v2.0/ | URI for public Keystone service |
| | admin_tenant_name | %SERVICE_TENANT_NAME% (must be set) | service |
| | admin_user | %SERVICE_USER% (must be set) | glance |
| | admin_password | %SERVICE_PASSWORD% (must be set) | glance |
| /etc/glance/glance-registry.conf | sql_connection | sqlite:////var/lib/glance/glance.sqlite | URI for remote MySQL database |
| | auth_uri | http://127.0.0.1:5000/v2.0 | URI for public Keystone service |
| | identity_uri | http://127.0.0.1:35357 | URI for administrative Keystone service |
| | admin_tenant_name | %SERVICE_TENANT_NAME% (must be set) | service |
| | admin_user | %SERVICE_USER% (must be set) | glance |
| | admin_password | %SERVICE_PASSWORD% (must be set) | glance |
| /etc/keystone/keystone.conf | admin_token | ADMIN | Token created using `openssl rand -hex 10` |
| | connection | sqlite:////var/lib/keystone/keystone.sqlite | URI for remote MySQL database |
| /etc/neutron/dhcp_agent.ini | evs_controller | ssh://evsuser@localhost | URI for Elastic Virtual Switch controller |
| /etc/neutron/l3_agent.ini | router_id | | Router UUID created using `neutron router-create` |
| | evs_controller | ssh://evsuser@localhost | URI for Elastic Virtual Switch controller |
| /etc/neutron/plugins/evs/evs_plugin.ini | evs_controller | ssh://evsuser@localhost | URI for Elastic Virtual Switch controller |
| /etc/neutron/neutron.conf | rabbit_host | localhost | Host name or IP address of RabbitMQ service |
| | auth_uri | http://127.0.0.1:5000/v2.0 | URI for public Keystone service |
| | identity_uri | http://127.0.0.1:35357 | URI for administrative Keystone service |
| | admin_tenant_name | %SERVICE_TENANT_NAME% (must be set) | service |
| | admin_user | %SERVICE_USER% (must be set) | neutron |
| | admin_password | %SERVICE_PASSWORD% (must be set) | neutron |
| /etc/nova/api-paste.ini | auth_uri | http://127.0.0.1:5000/v2.0 | URI for public Keystone service |
| | identity_uri | http://127.0.0.1:35357 | URI for administrative Keystone service |
| | admin_tenant_name | %SERVICE_TENANT_NAME% (must be set) | service |
| | admin_user | %SERVICE_USER% (must be set) | nova |
| | admin_password | %SERVICE_PASSWORD% (must be set) | nova |
| /etc/nova/nova.conf | glance_host | $my_ip | Host name or IP address of Glance service |
| | neutron_url | http://127.0.0.1:9696 | URI for Neutron service location |
| | neutron_admin_username | <None> | neutron |
| | neutron_admin_password | <None> | neutron |
| | neutron_admin_tenant_name | <None> | service |
| | neutron_admin_auth_url | http://localhost:5000/v2.0 | URI for Keystone service location |
| | connection | sqlite:///$state_path/$sqlite_db | URI for remote MySQL database |
| | rabbit_host | localhost | Host name or IP address of RabbitMQ service |
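The `admin_token` row for `/etc/keystone/keystone.conf` suggests replacing the default `ADMIN` value with a randomly generated token. A minimal sketch of generating one:

```shell
# Generate a random 10-byte token (20 hex characters) suitable for use
# as the Keystone admin_token value.
TOKEN=$(openssl rand -hex 10)
echo "$TOKEN"
```

The resulting value would then be copied into `keystone.conf` as the `admin_token` setting.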
Appendix B: Known Limitations
In the initial OpenStack release included with Oracle Solaris 11.2, there are some limitations.
- There is no remote console access to instances via the OpenStack dashboard. Instead, users should upload an SSH keypair using Horizon, which will be pushed into the new instance's `authorized_keys` file for `root`.
- At the current time, the version of Neutron included with Oracle Solaris 11.2 supports only a single plugin for network virtualization. As a result, only Nova nodes running Oracle Solaris are supported to the fullest extent.
See Also
See the OpenStack on Oracle Solaris Technology Spotlight web page.
Also see these additional resources:
- Download Oracle Solaris 11
- Access Oracle Solaris 11 product documentation
- Access all Oracle Solaris 11 how-to articles
- Learn more with Oracle Solaris 11 training and support
- See the official Oracle Solaris blog
- Check out The Observatory and OTN Garage blogs for Oracle Solaris tips and tricks
- Follow Oracle Solaris on Facebook and Twitter
About the Author
David Comay is a senior principal software engineer who has been at Sun and Oracle since 1996 when he began working in the networking area specializing in routing protocols and IPv6. He was the OS/Networking technical lead for the first two Oracle Solaris 8 update releases as well as for Oracle Solaris 9. He subsequently moved into the resource management area where he was a member of the original Oracle Solaris Zones project team. He led that team after its initial project integration through the end of Oracle Solaris 10 and for several of the subsequent Oracle Solaris 10 update releases. After driving the Oracle Solaris Modernization program and being the technical lead for the OpenSolaris binary releases as well as for Oracle Solaris 11, David is now the architect for the Oracle Solaris cloud strategy focusing initially on the integration of OpenStack with Oracle Solaris.