
Abiquo 4.7


Trial requirements

The following configurations are supported for an Abiquo trial environment.

| Components | Configurations | Notes |
|---|---|---|
| Abiquo Monolithic Server | Runs as a VM on ESXi or KVM | VLAN tagging configuration is required |
| NFS Server | NFS on a separate server is recommended (not the hypervisor datastore) | Required to store VM templates |
| Persistent Storage | Any NFS storage (not available for Hyper-V), vFiler on NetApp, Nexenta | Optional |
| Hypervisors | ESXi (with vCenter), KVM | Configure ESXi or install Abiquo KVM |
| Public Cloud providers | AWS, Azure ARM, or other supported providers | Optional. See Public cloud provider features |
| Abiquo Monitoring Server |  | Optional install; monitoring can be disabled in the trial |
| Collectd |  | Optional Abiquo collectd plugin installation |
| OpenStack Neutron | With KVM hypervisors and OpenStack clouds |  |
| VMware NSX | For firewalls and load balancers, use the Advanced edition or higher |  |

For versions, see Compatibility Tables.

For server specifications, see Trial Requirements.

 

Monolithic Install Preparation Table

During the monolithic install, you will need to enter the following parameters about your environment. These parameters configure your system to work with the platform prerequisites that you must set up before the install.

| Parameter | Value | Notes |
|---|---|---|
| Management IP address | _____._____._____._____ | Interface of the VM in the management network |
| Management IP netmask | _____._____._____._____ | Netmask of the management network |
| Default gateway | _____._____._____._____ | Default gateway for the VM in the management network |
| DNS server list | _____._____._____._____ _____._____._____._____ | IPs of DNS servers for the VM, separated by a blank space |
| Abiquo Datacenter ID | __________________ | For intercommunication of datacenter components. The ID must be the same on all Remote Services and V2V servers in the same datacenter, and unique to this datacenter |
| NFS Template repository | _____._____._____._____:___________________________ | IP address and directory of the NFS/CIFS repository server that stores the VM template repository. Each private cloud datacenter (and therefore each Remote Services and V2V server) must have its own NFS template repository, except private OpenStack cloud and Docker. By default the directory is /opt/vm_repository |
| Abiquo API Server FQDN | _____._____._____._____ | The fully qualified domain name of the Abiquo Server, so that the Appliance manager can work with HTTPS for VM template operations |

 

Platform networks

The main platform networks to configure for Abiquo are:

  • Management network: for managing the cloud platform, including monitoring of infrastructure and VMs, and deployment of VM templates
  • Service network: for VM communications using VLANs. Requires VLAN support. See  Configuring the Service Network for Cloud Tenant Networks
  • Storage network: for virtual storage on external storage devices, optional

Configure each of these networks as a separate network.
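Because the service network carries a trunk of tenant VLANs, each private VLAN typically appears as an 802.1Q sub-interface on the hypervisor's service NIC. As a minimal sketch (the NIC name `eth1` and the tag range are illustrative assumptions, not values from this guide), the following prints the `ip link` commands for review:

```shell
#!/bin/sh
# Sketch: generate the "ip link" commands that create 802.1Q sub-interfaces on
# a KVM host's service-network NIC, one per private VLAN tag.
# The NIC name and tag range are illustrative assumptions.
gen_vlan_cmds() {               # $1=NIC  $2=first tag  $3=last tag
  nic=$1
  for tag in $(seq "$2" "$3"); do
    echo "ip link add link $nic name $nic.$tag type vlan id $tag"
    echo "ip link set $nic.$tag up"
  done
}

gen_vlan_cmds eth1 2 5          # print the commands for VLANs 2-5 for review
```

The commands are printed rather than executed so you can check them against the VLAN range configured on your top-of-rack switch before running them as root.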


Within an Abiquo datacenter, every hypervisor should be connected to the top-of-rack switch to connect:

  • to the management network in access mode
  • to the storage network in access mode
  • to the service network in trunk mode.

Cloud networks

The Service Network is a VLAN network that includes the following virtual network types:

  • Private networks within the virtual datacenter
  • External networks belonging to enterprises that allow access to networks outside the virtual datacenter
  • Unmanaged networks belonging to enterprises that are assigned IP addresses outside the Abiquo environment
  • Public networks using public IP addresses for VMs

 



 

Abiquo can manage the service network with standard networking or software defined networking (SDN).

For standard networking, Abiquo management of the service network includes the following:

  • Virtual networks as separate VLANs
    • The Network Administrator configures VLANs in the top-of-rack switches
    • The Cloud Administrator enters the VLAN configuration in Abiquo, either when creating a rack (by supplying a tag range for private networks) or when creating a network
  • When a VM is deployed, Abiquo generates a unique MAC address and binds it to an IP from the appropriate subnet and assigns it to the appropriate VLAN
  • Network assignment can use the following options
    • ISC DHCP Servers: Abiquo can remotely manage ISC DHCP Servers, so the DHCP server will always lease the right IP to the MAC address on a VM.
      • The Network Administrator configures the DHCP server or relay server so that it listens to VLANs and can be reached over a network from the VMs (on the service network) so they can obtain the IP leases
        • Abiquo recommends the use of a DHCP relay server to provide VLAN support. See Configuring DHCP in the Administrator's Guide for information about how Abiquo uses DHCP
    • Guest setup: Abiquo can inject the network connection into a VM using cloud-init or hypervisor tools, which requires templates that support these methods. Abiquo will use this option if the DHCP server is not found
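The DHCP option above relies on a fixed MAC-to-IP binding on the managed DHCP server. A minimal sketch of the resulting ISC DHCP host stanza (the function name and the example MAC/IP are illustrative, not Abiquo's actual output):

```shell
#!/bin/sh
# Sketch: emit the ISC DHCP host stanza that pins a VM's Abiquo-generated MAC
# address to its assigned IP, which is the binding a managed DHCP server keeps
# so it always leases the right IP to the VM. Example values are illustrative.
emit_host_stanza() {            # $1=VM name  $2=MAC  $3=IP
  printf 'host %s {\n  hardware ethernet %s;\n  fixed-address %s;\n}\n' "$1" "$2" "$3"
}

emit_host_stanza vm-0001 00:50:56:aa:bb:cc 192.168.10.21
```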

For SDN, Abiquo has integrations with popular systems, such as VMware NSX and OpenStack Neutron. 

Network inputs

You will need to configure your firewalls to enable the following ports and communications.

Abiquo Monolithic Server

| Source | Destination | Input Port | Description |
|---|---|---|---|
| Client | Abiquo Monolithic Server | TCP 80 | HTTP |
| Client | Abiquo Monolithic Server | TCP 443 | HTTPS |

 

Platform and Storage Servers

| Source | Destination | Input Port | Description |
|---|---|---|---|
| Monolithic Server | Nexenta storage agent | TCP 2000 | Nexenta API |
| Monolithic Server | NetApp storage connector | TCP 80 | NetApp API |
| Monolithic Server | LVM storage connector | TCP 8180 | Abiquo LVMSCSI server |
| Hypervisor, Abiquo V2V | Any storage host | TCP 3260 | Volumes on the storage host (iSCSI) |

 

Abiquo Monitoring Server with KairosDB
| Source | Destination | Input Port | Description |
|---|---|---|---|
| Monolithic Server | Monitoring Server | TCP 8080 | KairosDB |
NFS Server

| Source | Destination | Input Port | Description |
|---|---|---|---|
| Monolithic Server, Hypervisor | NFS Server | TCP/UDP 2049 | NFS |
| Monolithic Server, Hypervisor | NFS Server | TCP/UDP 111 | RPC |

Hypervisors
| Source | Destination | Input Port | Description |
|---|---|---|---|
| Monolithic Server | ESXi | TCP 80 |  |
| Monolithic Server | ESXi | TCP 443 |  |
| Monolithic Server | Hyper-V | TCP 135 |  |
| Monolithic Server | KVM | TCP 8889 |  |
| Monolithic Server | XenServer | TCP 443 |  |
| Monolithic Server | Oracle VM | TCP 7002 |  |
| Monolithic Server | Public cloud | TCP 443 |  |

Allow ICMP

ICMP must be allowed between all components of the Abiquo platform, including hypervisors.
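As a minimal sketch of opening the ports from the tables above, the following prints the firewalld commands for an Abiquo Monolithic Server (the use of firewalld and the exact port list are assumptions; adjust for your hypervisors, storage, and distribution, or translate to iptables or security groups):

```shell
#!/bin/sh
# Sketch: print firewalld commands that open a set of ports, so they can be
# reviewed before running as root. Assumes firewalld is the host firewall.
open_ports() {
  for p in "$@"; do
    echo "firewall-cmd --permanent --add-port=$p"
  done
  echo "firewall-cmd --reload"
}

# HTTP/HTTPS from clients; NFS and RPC to the NFS server; KairosDB on the
# monitoring server (adapt this list to the tables above for each host role)
open_ports 80/tcp 443/tcp 2049/tcp 2049/udp 111/tcp 111/udp 8080/tcp
```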


 

 

NFS server

For NFS server setup, see NFS Server Setup.

Optional storage server

Abiquo can manage an external storage platform to offer self-service virtual storage to users. External storage can be used to allocate external storage volumes to Virtual Appliances, or to convert VM Templates to Persistent Virtual Templates. Persistent templates save a VM disk to the storage platform, allowing you to take advantage of storage features such as high availability. Use of external storage is optional, although some features may not be available if external storage is not available.

Abiquo has an NFS integration that can be used to create self-service iSCSI volumes on NFS storage, which is suitable for use in proof of concept tests. To use NFS storage, simply add the storage share to Abiquo as a storage device, or install the Abiquo LVM storage server. See Trial Storage Server.

Hypervisors

Abiquo recommends VMware hypervisors or Abiquo RedHat KVM hypervisors for the proof of concept test. Other hypervisors are supported as described in the Cloud Node documentation.  

Monitoring and metrics server

The Abiquo VM monitoring and metrics feature enables you to obtain a rapid and convenient overview of VM performance through the Abiquo cloud console. Abiquo supports monitoring and metrics for VMs on AWS, Azure, ESXi, KVM, and XenServer, as well as Docker. ESXi hypervisors must be connected to a vCenter server.

Supported monitoring and metrics:

  • Built-in metrics provided by each hypervisor or public cloud plugin and fetched periodically by the monitor manager
  • Custom metrics that can be configured and populated through the Abiquo API. After you create a metric, you can push the metric’s datapoints to Watchtower using the Abiquo API
  • Custom metrics that can be configured in collectd and populated through the Abiquo collectd plugin

Test environment: install the Monitoring OVA, which includes:

  1. Watchtower server
  2. KairosDB: can run without Cassandra using the default KairosDB configuration with the H2 datastore
  3. Cassandra: can run on the same server as KairosDB in small installations, or on a separate server

Install Abiquo Monitoring Appliance

The Abiquo Monitoring Appliance is a CentOS system with Abiquo watchtower, KairosDB and Cassandra on the same VM. For instructions on how to install it, see Quickstart Install Abiquo OVAs.

After you complete the install, configure the following:

  1. Edit the kairosdb.properties file and set KairosDB properties, such as data retention time. The following link describes the KairosDB property that sets the time to live for a datapoint; Cassandra automatically deletes the datapoint after this time elapses: https://github.com/kairosdb/kairosdb/blob/v0.9.4/src/main/resources/kairosdb.properties#L91-L95

  2. In cassandra.yaml, configure seeds with the KairosDB IP (even though it is on the same server) and set broadcast_address as described in the Cassandra docs
  3. Firewall: You must open the appropriate port so that the Abiquo Server and ALL the Remote Services servers (except separate V2V services) can access Watchtower
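Steps 1 and 2 above can be sketched as edits to copies of the two config files. The `datapoint_ttl` property name follows the linked kairosdb.properties; the 90-day TTL value and the file paths in the usage comment are illustrative assumptions:

```shell
#!/bin/sh
# Sketch: set the KairosDB datapoint TTL and point Cassandra's seeds and
# broadcast_address at this server's IP. Value and paths are assumptions.
set_kairos_ttl() {              # $1=kairosdb.properties path  $2=TTL in seconds
  sed -i "s|^#\{0,1\}kairosdb\.datastore\.cassandra\.datapoint_ttl.*|kairosdb.datastore.cassandra.datapoint_ttl = $2|" "$1"
}

set_cassandra_addrs() {         # $1=cassandra.yaml path  $2=this server's IP
  sed -i "s|- seeds:.*|- seeds: \"$2\"|; s|^broadcast_address:.*|broadcast_address: $2|" "$1"
}

# Usage (as root on the monitoring appliance; paths are assumptions):
#   set_kairos_ttl /opt/kairosdb/conf/kairosdb.properties 7776000
#   set_cassandra_addrs /etc/cassandra/conf/cassandra.yaml 10.0.0.5
```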

Monitoring Post-install configuration

  1. Synchronize server clocks: you MUST configure and synchronize NTP across all Abiquo servers and all hypervisors before you enable metrics. Alerts will not be sent if the time is not synchronized on all servers, including the monitoring and metrics server and the hypervisors (ESXi and vCenter servers)
  2. On remote services (Monolithic and Remote Services, but not V2V services), edit the abiquo.properties file to add the following properties, then restart the Tomcat service.

    # Enable/disable monitoring and alarms
    abiquo.monitoring.enabled = true
     
    # Address where watchtower is located
    abiquo.watchtower.host=IP_OF_YOUR_WATCHTOWER_INSTALLATION
    
    # Port where watchtower is listening
    abiquo.watchtower.port=36638
  3. Optional: Configure KVM properties. See AIM - Abiquo Infrastructure Management.
    1. To support metric dimensions in KVM through AIM, there is a related abiquo.properties entry called abiquo.vsm.measures.pusher.frequency. See Abiquo Configuration Properties#vsm
  4. Within Abiquo, configure fetching and display of metrics by default
    1. To fetch and display all metrics for all VMs by default, set the "Enable VM monitoring by default" property in the General section of the Configuration view (see Configuration View#General). This property applies to all VMs created after you change its value.
  5. Within Abiquo assign user privileges to work with monitoring, metrics, alarms and alerts as required

Use collectd to gather metrics

Abiquo supports collectd for custom metrics collection on Linux VMs. The Abiquo-collectd-plugin automatically pushes the values gathered by collectd to the Abiquo API, using a dedicated API endpoint. As with other custom metrics, the Abiquo API then pushes the collectd metrics to the Abiquo Monitoring Server. You can use this method for hypervisors and providers where Abiquo does not support built-in metrics.

Install and configure collectd

To install and configure collectd:

  1. In Abiquo, the tenant administrator or user configures the collectd application (authentication and permissions) using OAuth. The user must have the “Allow user to push own metrics” privilege and assign this privilege to the plugin.
  2. Users install collectd on their VMs, and then install the Abiquo collectd plugin (from source or using the Abiquo Chef cookbook).
  3. The user configures their metrics in collectd.
  4. The Abiquo collectd plugin automatically pushes all collectd metrics to the Abiquo API in PUTVAL JSON format.
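As a minimal sketch of step 2, the following writes a collectd.conf fragment that loads a plugin through collectd's Python plugin. The module name "abiquo-writer" and the config keys are illustrative assumptions; use the values from the Abiquo collectd plugin's own documentation:

```shell
#!/bin/sh
# Sketch: write a collectd.conf fragment that loads a collectd Python plugin.
# Module name and config keys are assumptions, not the plugin's documented API.
write_collectd_fragment() {     # $1=output file  $2=Abiquo API endpoint
  cat > "$1" <<EOF
LoadPlugin python
<Plugin python>
  Import "abiquo-writer"
  <Module "abiquo-writer">
    URL "$2"
  </Module>
</Plugin>
EOF
}

write_collectd_fragment /tmp/abiquo-collectd.conf https://abiquo.example.com/api
```

Drop the resulting fragment into collectd's include directory on the VM and restart collectd.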

Monolithic server

Download the Abiquo Monolithic OVA. Before you begin, you will need the values of the following OVA properties for step 12 of the deploy.

For instructions, see Quickstart Install Abiquo OVAs.

Post-install steps

  1. After installation, Abiquo will display the address where you can log in and the login details. Remember to change your Abiquo passwords as soon as possible.
  2. The Monolithic Server is a CentOS system. After install, log in and configure the server, including:
    1. Configure system time (ntpdate and ntp)
    2. Change the root password in CentOS
    3. Your template conversions will be created on this server, so if you wish to use larger templates, extend the default 10 GB logical volume or mount another volume at /opt/v2v-conversions. Completed conversions will be stored on the NFS Repository
    4. Load SSL certificates
  3. Configure Abiquo properties. The default Abiquo properties file is as follows:

    /opt/abiquo/config/abiquo.properties

    For a monolithic deployment, you will probably configure some of the following properties:


    1. To use a monitoring server, enter monitoring server IP address and port, and enable monitoring
    2. For Hyper-V or XenServer hypervisors, you may need to configure the repository location properties which are described as part of the hypervisor configuration
    3. For virtualized KVM servers, enter the fullVirt property. There is a complete guide at Abiquo Configuration Properties.
  4. After configuring the properties, remember to restart the Abiquo Tomcat server

    service abiquo-tomcat restart
  5. Configure Abiquo UI, for example: 
    1. To change the UI text or translate it into different languages, create language files and configure the UI to work with them
    2. To configure Abiquo to work offline
    3. To customize the UI look and feel, creating your own themes, see Abiquo Branding Guide
    See also the complete guide to user interface configuration at Configure Abiquo UI
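Extending the 10 GB conversions volume mounted at /opt/v2v-conversions (step 2.3 above) can be sketched as follows. The volume group and logical volume names and the XFS filesystem are illustrative assumptions; check yours with `lvs` and `df -T`. The commands are printed rather than executed so you can review them before running as root:

```shell
#!/bin/sh
# Sketch: print the commands that grow the conversions logical volume and its
# filesystem. VG/LV names and XFS are assumptions; verify with lvs and df -T.
grow_conversions_lv() {         # $1=extra space, e.g. +40G
  echo "lvextend -L $1 /dev/vg_abiquo/lv_v2v"
  echo "xfs_growfs /opt/v2v-conversions"
}

grow_conversions_lv +40G
```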

For more information about the configuration described in this section, see Configuring Abiquo

Log In to the Abiquo web client

Now open your web browser and type in the site address for the Abiquo server.

https://<myabiquo.example.com>/ui

The default cloud administrator login username and password are "admin" and "xabiquo", respectively. Remember to change your password as soon as you log in for the first time.

If you change the name and/or password of the "default_outbound_api_user" in Abiquo, you will need to change them in the Abiquo properties file. The Abiquo "M" module completes event details in Abiquo and streams events in the outbound API. For more details see Abiquo Configuration Properties#Configure Abiquo Events Properties

Add a License

See Configuration View#License Management

Quick Tutorial

For an overview of the Abiquo platform, work through the tutorials built into the product or try the Abiquo Quick Tutorial.

 
