Abiquo 4.7

Trial Requirements

To deploy a trial environment, you need to obtain the Abiquo Monolithic appliance, deploy it in your infrastructure, and configure it according to your environment. Optionally, you can also deploy the Abiquo Monitoring appliance, but this is not a requirement.

The following component requirements must be fulfilled to deploy an Abiquo trial environment:

  • Hypervisors
  • Network
  • VM repository folder
  • Abiquo Monolithic appliance
  • Abiquo Monitoring appliance

Some requirements are common to all appliances, while others are specific to each appliance's function within the platform.

It is a good idea to document the required information for each component in the corresponding worksheet in the links below. We provide example and empty worksheets for you to document all the relevant Abiquo platform configuration details:

The link below shows an example worksheet:

Infrastructure

The following requirements must be fulfilled in the infrastructure for Abiquo deployment to work as expected:

Component | Notes
VM repository folder | A unique, dedicated NFS or CIFS shared folder must be present in each Abiquo DC before deployment. Check NFS Server Setup for more details
Hypervisors | Abiquo recommends ESXi with vCenter, Hyper-V, or KVM. Check your hypervisor requirements in the Cloud Node documentation
Clock synchronization | Platform clocks must be synchronized. Abiquo will use NTP to keep platform clocks synchronized. Check your vendor documentation for other components
Service network | A trunk-capable network segment with VLAN tagging capabilities

For versions, see Compatibility Tables 

For server specifications, see Trial Requirements
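
To verify the clock synchronization requirement on a CentOS-based appliance or cloud node, you can check the NTP client status. This is a minimal sketch that assumes chrony is the time service in use; adapt it if your hosts run ntpd or another client.

# Check whether the system clock is synchronized (assumes chronyd)
timedatectl status
chronyc tracking

# If the service is not running, enable and start it
sudo systemctl enable --now chronyd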

Network

Every time a VM is deployed, Abiquo creates its NICs and attaches them to the corresponding networks inside the service network.

 Abiquo manages the service network in different ways depending on the underlying network solution:

  • Standard VLAN network management
  • SDN management

Standard VLAN management uses ISC DHCP servers with OMAPI support and DHCP relay servers to manage the IP networks. Check the links below for further details:

Also, Abiquo can inject network configuration into VMs using cloud-init or hypervisor tools, which requires VM templates that support these methods. Abiquo will use this option if a DHCP server is not found.
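
As a reference only, an ISC DHCP server used for standard VLAN management needs OMAPI enabled so that the platform can register and remove leases. The snippet below is a hypothetical dhcpd.conf fragment; the key name, secret, and subnet values are placeholders, not Abiquo defaults.

# /etc/dhcp/dhcpd.conf fragment (sketch): enable OMAPI for remote lease management
omapi-port 7911;
key omapi_key {
    algorithm hmac-md5;
    secret "REPLACE_WITH_BASE64_SECRET";
}
omapi-key omapi_key;

# Placeholder service-network subnet managed by this DHCP server
subnet 192.168.100.0 netmask 255.255.255.0 {
    range 192.168.100.10 192.168.100.200;
    option routers 192.168.100.1;
}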

For SDN solutions such as VMware NSX and OpenStack Neutron, check the configuration requirements in the links below:

Abiquo appliances

These are the common requirements for all Abiquo platform appliances. All appliances require these details during the bootstrap process, and you must have them in place before the deployment:

Parameter | Notes
Friendly hostname | The name the appliance will use internally. Abiquo does not use it for anything else, and it can be freely chosen
Management IP address | IP of the management NIC
Management IP netmask | Netmask of the management NIC
Default gateway | Default gateway in the management network
DNS server list | IPs of DNS servers, separated by a blank space
NTP server list | Abiquo appliances use NTP to fulfill the platform clock synchronization requirements

Monolithic

You will need to enter the parameters below during the server appliance installation:

Parameter | Notes
Server FQDN | This FQDN must be resolvable from customer premises for SSL and the Appliance Manager to work as expected. Abiquo will use self-signed certificates for trial environments
Abiquo Datacenter ID | Internal ID of the trial DC. It must be unique for each DC, and all remote services and V2V appliances inside the DC must use the same datacenter ID
NFS Template repository | NFS share for the VM repository of the DC. Each private DC must have a different NFS share
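
Before running the installation, it is useful to confirm that the chosen FQDN resolves and that the NFS share is exported; a quick check with placeholder hostnames:

# Check that the server FQDN resolves from the client network (placeholder name)
nslookup myabiquo.example.com

# List the shares exported by the NFS server that will hold the VM repository (placeholder name)
showmount -e nfs.example.com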

The monolithic appliance must be reachable from customer premises through its FQDN at the ports below:

Source | Destination port | Notes
Anywhere | TCP/80 (HTTP) | Redirected to port 443 by default
Anywhere | TCP/443 (HTTPS) | Proxy port for API/AM/UI services
Anywhere | TCP/41337 (HTTPS) | Proxy port for VNC services
Monitoring appliance | TCP/5672 | RabbitMQ default port for Abiquo components to exchange messages
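
If the Monolithic appliance or a firewall in front of it filters traffic, these ports must be allowed. The following is a hedged firewalld sketch for a CentOS-based host; skip it if you manage firewall rules elsewhere.

# Allow the Abiquo trial ports (firewalld example)
sudo firewall-cmd --permanent --add-port=80/tcp      # HTTP, redirected to 443
sudo firewall-cmd --permanent --add-port=443/tcp     # API/AM/UI proxy
sudo firewall-cmd --permanent --add-port=41337/tcp   # VNC proxy
sudo firewall-cmd --permanent --add-port=5672/tcp    # RabbitMQ, only if you deploy the Monitoring appliance
sudo firewall-cmd --reload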
Monitoring

The Abiquo Monitoring appliance is completely optional, and you can safely skip this part. To deploy the Monitoring appliance, you need the following:

Parameter | Notes
Server management IP address | The Abiquo server management IP address, used to connect to the RabbitMQ service (port 5672)

The Monitoring appliance must be reachable from the Monolithic appliance at the ports below:

Source | Destination port | Notes
Monolithic appliance | TCP/36638 (HTTP) | Monitoring services

VM repository folder

In Abiquo, all DCs must have their own VM repository folder, which holds the VM templates for all enterprises in each DC. The remote services and hypervisors must be able to mount and manage this folder through NFS or CIFS, depending on the underlying hypervisor technology.

When you configure the platform, enter the location of the VM repository folder, and the platform will automatically configure the appliances to use this folder as the VM repository and mount it on the hypervisors when needed.
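
For illustration only, with placeholder paths, addresses, and export options, the NFS export for the VM repository and a manual mount test could look like this:

# On the NFS server: export the VM repository folder (placeholder path, network, and options)
echo '/opt/vm_repository 10.60.1.0/24(rw,no_root_squash,sync)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On the Monolithic appliance or a hypervisor: test that the share mounts
sudo mkdir -p /opt/vm_repository
sudo mount -t nfs nfs.example.com:/opt/vm_repository /opt/vm_repository
df -h /opt/vm_repository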

For further details, please check the VM repository documentation at the link below:

Platform networks

The main platform networks to configure for Abiquo are:

  • Management network: for managing the cloud platform, including monitoring of infrastructure and VMs, and deployment of VM templates
  • Service network: for VM communications using VLANs. Requires VLAN support. See  Configuring the Service Network for Cloud Tenant Networks
  • Storage network: for virtual storage on external storage devices, optional

Configure each of these networks as a separate network.
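
To confirm that the service network segment is trunk-capable, you can bring up a temporary tagged interface on a host connected to it; a minimal sketch with placeholder interface names, VLAN ID, and addresses:

# Create a temporary VLAN sub-interface on the service NIC (placeholder values)
sudo ip link add link eth1 name eth1.100 type vlan id 100
sudo ip addr add 192.168.200.10/24 dev eth1.100
sudo ip link set eth1.100 up

# Ping another tagged host on the same VLAN, then remove the test interface
ping -c 3 192.168.200.11
sudo ip link del eth1.100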


Network inputs

You will need to configure your firewalls to enable the following ports and communications.

Platform and Storage Servers

Source | Destination | Input Port | Description
Monolithic Server | Nexenta storage agent | TCP 2000 | Nexenta API
Monolithic Server | NetApp storage connector | TCP 80 | NetApp API
Monolithic Server | LVM storage connector | TCP 8180 | Abiquo LVM iSCSI server
Hypervisor, Abiquo V2V | Any storage host | TCP 3260 | iSCSI volumes on the storage host

Abiquo Monitoring Server with KairosDB

Source | Destination | Input Port | Description
Monolithic Server | Monitoring Server | TCP 8080 | KairosDB

NFS Server

Source | Destination | Input Port | Description
Monolithic Server, Hypervisor | NFS | TCP/UDP 2049 | NFS
Monolithic Server, Hypervisor | NFS | TCP/UDP 111 | RPC

Hypervisors

Source | Destination | Input Port
Monolithic Server | ESXi | TCP 80
Monolithic Server | ESXi | TCP 443
Monolithic Server | Hyper-V | TCP 135
Monolithic Server | KVM | TCP 8889
Monolithic Server | XenServer | TCP 443
Monolithic Server | Oracle VM | TCP 7002
Monolithic Server | Public cloud | TCP 443
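
After the firewall rules are in place, you can probe the required ports from the Monolithic server to confirm that the paths are open; a sketch using nc with placeholder hostnames (test only the endpoints that exist in your trial):

# Probe management and storage ports from the Monolithic server (placeholder hostnames)
nc -zv vcenter.example.com 443      # ESXi / vCenter
nc -zv hyperv01.example.com 135     # Hyper-V
nc -zv kvm01.example.com 8889       # KVM cloud node
nc -zv nfs.example.com 2049         # NFS VM repository
nc -zv storage01.example.com 3260   # iSCSI storage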

Deploy the Abiquo appliances

Download the Abiquo Monolithic appliance and, optionally, the Monitoring appliance, and deploy a VM from each. During deployment, you will need the values of each OVA's properties as described in the previous sections.

The steps to install the appliances on ESXi are these:

  1. Download the appliance OVA from the corresponding link to your desktop
  2. Open the vCenter vSphere web client
  3. Go to Host and clusters section in the client side panel
  4. Go to Actions and click on Deploy OVF template
  5. Select Local file and choose the OVA you downloaded in step 1. Click Next
  6. The vSphere client will display the OVA details. Click Next
  7. Accept the EULA and click Next
  8. Edit the deployed VM name and choose where you want to deploy it. After this, click Next
  9. Select which cluster or host will be used for the deployment and click Next
  10. Choose the datastore for the deployed VM and click Next
  11. Select the network destination for the VM management NIC and click Next
  12. Enter the template details depending on the OVA you are deploying. For more details, see the parameters table above. When configured, click Next
  13. Review the deployment configuration details. If they are OK, you can select the "Power on after deployment" checkbox. But if you need to add extra NICs and HDDs to the VM before deployment, do not select it. Click Finish
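
As an alternative to the vSphere web client, the OVA can usually be deployed from the command line with VMware ovftool. The sketch below uses placeholder credentials, inventory paths, and names; OVA properties can be passed with the --prop:<key>=<value> option.

# Deploy the Monolithic OVA with ovftool (placeholder values throughout)
ovftool --acceptAllEulas \
  --name=abiquo-monolithic \
  --datastore=datastore1 \
  --network="VM Network" \
  ./abiquo-monolithic.ova \
  'vi://admin@vcenter.example.com/DC1/host/Cluster1'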

Appliance post-install steps

  1. After installation, Abiquo will display the UI login URL. Remember to change your Abiquo passwords as soon as possible.
  2. Change the root password in the appliances
  3. Template conversions and uploads will consume space on the server until they move to the NFS VM repository. So, if you wish to use big templates, extend the appropriate filesystems accordingly (see the example after the restart command below).
  4. Edit the monitoring appliance properties to set the data retention policy. See the following link for further details: Abiquo monitoring appliance configuration

  5. Review the Abiquo properties file at /opt/abiquo/config/abiquo.properties. For trial deployments, you will probably configure some of the following properties:

    • For Hyper-V or XenServer hypervisors, you may need to configure the repository location properties which are described as part of the hypervisor configuration

    • To use a monitoring server, enter the monitoring server IP address and port, and enable monitoring

After changing the properties, always remember to restart the Abiquo Tomcat service:

service abiquo-tomcat restart
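
Step 3 above mentions extending filesystems to handle large templates; if the appliance uses LVM, a hedged sketch follows. The volume group, logical volume, mount point, and size are placeholders that you must adapt to your appliance layout.

# Grow the filesystem that stores template conversions (placeholder VG/LV names, mount point, and size)
sudo lvextend -L +100G /dev/vg_system/lv_root
sudo xfs_growfs /                    # for XFS; use resize2fs for ext4
df -h /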

For monitoring to work, edit the /opt/abiquo/config/abiquo.properties file on the appliance that runs the remote services (the Monolithic appliance in a trial environment), add the properties below, and then restart the abiquo-tomcat service. Remember to replace MONITORING_IP with the monitoring appliance management IP:

# Enable/disable monitoring and alarms
abiquo.monitoring.enabled = true
 
# Address where watchtower is located
abiquo.watchtower.host=MONITORING_IP

# Port where watchtower is listening
abiquo.watchtower.port=36638

Configure Abiquo UI

  1. To change the UI text or translate it into different languages, create language files and configure the UI to work with them

  2. To configure Abiquo to work offline
  3. To customize the UI look and feel by creating your own themes, see the Abiquo Branding Guide

See also the complete guide to user interface configuration at Configure Abiquo UI

For more information about the configuration described in this section, see Configuring Abiquo

Log In to the Abiquo web client

Now open your web browser and type in the site address for the Abiquo server.

https://<myabiquo.example.com>/ui

The default cloud administrator login username and password are "admin" and "xabiquo", respectively. Remember to change your password as soon as you log in for the first time.

If you change the name and/or password of the "default_outbound_api_user" in Abiquo, you will need to change them in the Abiquo properties file. The Abiquo "M" module completes event details in Abiquo and streams events in the outbound API. For more details see Abiquo Configuration Properties#Configure Abiquo Events Properties
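
For reference, the credentials are set in /opt/abiquo/config/abiquo.properties. The property names below are an assumption based on the events ("M" module) configuration; verify them against the Abiquo Configuration Properties page before use.

# Outbound API / "M" module credentials (property names are an assumption - verify in the properties reference)
abiquo.m.identity = default_outbound_api_user
abiquo.m.credential = NEW_PASSWORD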

Add a License

Review the link below to obtain an Abiquo license and set it on the platform:

Configure a platform license

Further steps

For an overview of the Abiquo platform, work through the tutorials built into the product or try the Abiquo Quick Tutorial.

The following components are optional for an Abiquo trial environment. The configuration of the Monitoring appliance is described in this document. Check the links for further details on each component.

Component | Notes
Abiquo Monitoring Server | Monitoring can be enabled or disabled in the trial environment
Public cloud providers | AWS, Azure ARM, or other supported providers. Optional; see Public cloud provider features
Collectd | Use the Abiquo collectd plugin installation
Persistent Storage | Any NFS storage (not available for Hyper-V), vFiler on NetApp, Nexenta
OpenStack Neutron | With KVM hypervisors and OpenStack clouds
VMware NSX | For firewalls and load balancers, use the Advanced edition or higher
Jaspersoft Reports | For Abiquo reports. Separate server for the JasperServer Community and Abiquo Reports modules
Chef Server | Local Chef server or hosted Chef, and access to cookbooks. Requires cloud-init templates

 

