

Introduction

When a user requests a deploy, reconfigure, or undeploy, your organization may require an automatic or manual intervention. The Abiquo workflow feature can suspend user tasks, send their details to a workflow connector (which interacts with a workflow tool), and wait for a response. Abiquo sends the request to continue a task as a webhook (http://en.wikipedia.org/wiki/Webhook).

To use the Abiquo workflow feature, create a connector for integration with your workflow system. The workflow connector should receive HTTP requests, process them and obtain a response from your workflow system, then respond to Abiquo. Your workflow system can be a sophisticated tool that defines and coordinates different workflows with third-party components or departments, and so on. Or it could be a simple email notification system that asks your managers for a one-click approval.

 

Workflow notes

  • Should I use workflow or the Outbound API? If you do not need to suspend tasks in Abiquo until responses are received from the workflow tool, Abiquo recommends that you use the Outbound API.
  • You cannot modify and continue a job that has been paused by the workflow feature; you can only continue or cancel it. With the workflow feature, Abiquo creates the user's job and pauses it before it goes to the virtual factory.
Scope of the workflow feature

The Abiquo workflow feature controls the deploy, reconfigure, and undeploy of virtual machines and virtual appliances (groups of virtual machines).

Diagram of deploy process with workflow

This diagram shows a simplified view of how an Abiquo deploy functions with workflow.

  1. When the user requests a deploy, undeploy, or reconfigure of a virtual machine, Abiquo checks that they are allowed to perform this action and that resources are available.
  2. Abiquo then creates the job and holds it. The virtual machine displays a status of "Waiting for review".
  3. Abiquo sends a message to the workflow endpoint with information about the user's task(s) and the link(s) to continue or cancel the task(s).
  4. The workflow tool can send an answering request to Abiquo to continue or cancel the task(s).
  5. Abiquo then proceeds accordingly.

Manual Requests
Note that there are also two manual request options in the Abiquo API and UI (see Manage Workflow Tasks):
  1. The user can cancel their own task(s)
  2. The manager user can continue or cancel the task(s)

Workflow Integration Reference

This section provides more details about the workflow integration.

Create a workflow integration

To create a workflow integration do these steps:

  1. Create a workflow connector to process workflow requests:
    1. Receive a POST request by HTTP with details of the requested operation. See #Sample Request to Workflow Connector
      1. Check the details and #Links in the request object
      2. If necessary, obtain more information from the Abiquo API or other systems
    2. Determine if the user request should be continued or cancelled, using the workflow tool or manually
    3. Send a POST request to the continue or cancel link in the original workflow request object
      1. You can add a plain text message body, which will be added to the task
      2. See #Sample workflow continue request and #Sample workflow cancel request
  2. In Abiquo:
    1. Create a role for the workflow connector to access Abiquo, with the following privileges:
      1. Manage workflow - WORKFLOW_OVERRIDE privilege
      2. Sufficient privileges to enable the user to read data the workflow connector needs to access
    2. Create an Abiquo workflow user with the new role
    3. You can register the workflow connector in the Abiquo tenants using OAuth. See Authentication#OAuthv1.0VersionAAuthentication
    4. #Configure workflow on the platform
    5. #Configure workflow per cloud tenant
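As an illustration of step 1, the receiving side of a connector might look like the following minimal Python sketch. This is not Abiquo code: the HTTP server and the hand-off to your workflow system are placeholder assumptions; the parsing helper extracts the continue and cancel links from the Task payload shown in the next section.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def extract_action_links(payload: dict) -> list[dict]:
    """For each task in a tasks payload, collect its continue/cancel hrefs.

    Abiquo sends only the path segment in each href (see the note on
    links under the sample request)."""
    results = []
    for task in payload.get("collection", []):
        links = {link["rel"]: link["href"] for link in task.get("links", [])}
        results.append({
            "taskId": task["taskId"],
            "type": task["type"],
            "continue": links.get("continue"),
            "cancel": links.get("cancel"),
        })
    return results

class WorkflowConnector(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        tasks = extract_action_links(json.loads(body))
        # Placeholder: hand `tasks` to your workflow system here; it will
        # later POST to the continue or cancel link of each task.
        self.send_response(204)  # Abiquo expects 204 No Content
        self.end_headers()

# To run the connector:
#     HTTPServer(("", 8080), WorkflowConnector).serve_forever()
```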
Sample request to workflow connector

This is a sample request sent to the workflow connector by Abiquo. It shows the Task DTO of a deploy task with three jobs: schedule, configure and power on. 

For testing, you can use a request inspection service, such as requestb.in, which will display the POST request from the Abiquo API.


Header text:

POST request

FORM/POST PARAMETERS
None


HEADERS
Content-Length: 2626
User-Agent: Java/1.8.0_51
Total-Route-Time: 0
Accept: text/plain
Connection: close
Host: requestb.in
Content-Type: application/vnd.abiquo.tasks+json
Cache-Control: no-cache
Pragma: no-cache
X-Request-Id: 0c069e96-ccc2-4338-8b76-c911314b136c
Connect-Time: 1
Via: 1.1 vegur

Request payload:

{  
   "links":[  

   ],
   "collection":[  
      {  
         "links":[  
            {  
               "diskController":null,
               "diskControllerType":null,
               "diskLabel":null,
               "length":null,
               "title":null,
               "hreflang":null,
               "rel":"self",
               "type":null,
               "href":"cloud/virtualdatacenters/1/virtualappliances/1/virtualmachines/4/tasks/d89acdfd-e875-4d45-aacc-3c4b86b98572"
            },
            {  
               "diskController":null,
               "diskControllerType":null,
               "diskLabel":null,
               "length":null,
               "title":null,
               "hreflang":null,
               "rel":"parent",
               "type":null,
               "href":"cloud/virtualdatacenters/1/virtualappliances/1/virtualmachines/4/tasks"
            },
            {  
               "diskController":null,
               "diskControllerType":null,
               "diskLabel":null,
               "length":null,
               "title":null,
               "hreflang":null,
               "rel":"target",
               "type":null,
               "href":"cloud/virtualdatacenters/1/virtualappliances/1/virtualmachines/4"
            },
            {  
               "diskController":null,
               "diskControllerType":null,
               "diskLabel":null,
               "length":null,
               "title":null,
               "hreflang":null,
               "rel":"user",
               "type":null,
               "href":"admin/enterprises/1/users/1"
            },
            {  
               "diskController":null,
               "diskControllerType":null,
               "diskLabel":null,
               "length":null,
               "title":null,
               "hreflang":null,
               "rel":"cancel",
               "type":null,
               "href":"cloud/virtualdatacenters/1/virtualappliances/1/virtualmachines/4/tasks/d89acdfd-e875-4d45-aacc-3c4b86b98572/action/cancel"
            },
            {  
               "diskController":null,
               "diskControllerType":null,
               "diskLabel":null,
               "length":null,
               "title":null,
               "hreflang":null,
               "rel":"continue",
               "type":null,
               "href":"cloud/virtualdatacenters/1/virtualappliances/1/virtualmachines/4/tasks/d89acdfd-e875-4d45-aacc-3c4b86b98572/action/continue"
            }
         ],
         "taskId":"d89acdfd-e875-4d45-aacc-3c4b86b98572",
         "userId":"1",
         "type":"DEPLOY",
         "ownerId":"4",
         "state":"QUEUEING",
         "creationTimestamp":1459348714,
         "timestamp":0,
         "jobs":{  
            "links":[  

            ],
            "collection":[  
               {  
                  "links":[  

                  ],
                  "id":"d89acdfd-e875-4d45-aacc-3c4b86b98572.d4759c5f-7661-48aa-a713-9fd324e58873",
                  "parentTaskId":"d89acdfd-e875-4d45-aacc-3c4b86b98572",
                  "type":"SCHEDULE",
                  "description":"Deploy task's schedule on virtual machine with id 4",
                  "state":"PENDING",
                  "rollbackState":"UNKNOWN",
                  "creationTimestamp":1459348714,
                  "timestamp":0
               },
               {  
                  "links":[  

                  ],
                  "id":"d89acdfd-e875-4d45-aacc-3c4b86b98572.2d19f782-8266-480a-a492-cea61dfbd82f",
                  "parentTaskId":"d89acdfd-e875-4d45-aacc-3c4b86b98572",
                  "type":"CONFIGURE",
                  "description":"Deploy task's configure on virtual machine with id 4",
                  "state":"PENDING",
                  "rollbackState":"UNKNOWN",
                  "creationTimestamp":1459348714,
                  "timestamp":0
               },
               {  
                  "links":[  

                  ],
                  "id":"d89acdfd-e875-4d45-aacc-3c4b86b98572.d40ca41d-02cf-49cd-a363-5894fe5e723e",
                  "parentTaskId":"d89acdfd-e875-4d45-aacc-3c4b86b98572",
                  "type":"POWER_ON",
                  "description":"Deploy task's power on on virtual machine with id 4",
                  "state":"PENDING",
                  "rollbackState":"UNKNOWN",
                  "creationTimestamp":1459348714,
                  "timestamp":0
               }
            ],
            "totalSize":null
         }
      }
   ],
   "totalSize":null
}

All links sent to the workflow endpoint contain only the path segment of the URI, because Abiquo might be behind a load balancer, firewall, and so on. The workflow connector must therefore know the address of the Abiquo server.

For example, consider an API on a host with IP 10.0.0.4 that is exposed publicly at company.com/abiquo/api. If Abiquo sent the full URI, the links would be in the form http://10.0.0.4/api/object/task. However, the API might not be accessible at that IP (due to iptables, firewalls, etc.) but only at http://company.com/abiquo/api/object/task. When it builds the links, Abiquo does not have all this information (e.g. company.com/abiquo/api), so it sends the link with the href object/task.
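Because the hrefs are relative, the connector must prepend the API base URL it knows. A minimal sketch (the base URL is an illustrative assumption, reusing the example above):

```python
from urllib.parse import urljoin

# Assumption for illustration: the public base URL of your Abiquo API.
# Note the trailing slash, which urljoin needs to append path segments.
API_BASE = "http://company.com/abiquo/api/"

def to_absolute(href: str) -> str:
    # Abiquo sends hrefs such as "cloud/virtualdatacenters/1/..." with no
    # scheme or host, so join them onto the configured base.
    return urljoin(API_BASE, href)
```

For example, `to_absolute("object/task")` yields `http://company.com/abiquo/api/object/task`.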

Sample workflow continue request

Here is an example of a request that the workflow tool would send to Abiquo to continue a task. Note that the message text is "accepted".

API example removed: POST_cld_vdcs_X_vapps_X_vms_X_tsks_X_act_continue_CT_tp
You can download the API examples archive from ABI38Confluence-space-export-152334-314.html.zip
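The archived API example is not reproduced here, but based on the description above, a continue request is simply a POST to the task's continue link, optionally with a plain-text message body that Abiquo stores with the task. A hedged sketch of how a connector might describe that request (the URL and the Content-Type header are illustrative assumptions):

```python
def build_continue_request(continue_url: str, message: str = "accepted") -> dict:
    """Describe the HTTP request the workflow tool sends to continue a task.

    The optional plain-text body is stored with the task's extradata."""
    return {
        "method": "POST",
        "url": continue_url,
        "headers": {"Content-Type": "text/plain"},  # assumption
        "body": message,
    }

# Illustrative URL built from the sample payload's continue link:
req = build_continue_request(
    "http://company.com/abiquo/api/cloud/virtualdatacenters/1/"
    "virtualappliances/1/virtualmachines/4/tasks/"
    "d89acdfd-e875-4d45-aacc-3c4b86b98572/action/continue"
)
```

The cancel request is analogous: a POST to the task's cancel link, e.g. with the message "cancelled".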


Sample workflow cancel request

Here is an example of a request that the workflow tool would send to Abiquo to cancel a task. Note the message text is "cancelled".

API example removed: POST_cld_vdcs_X_vapps_X_vms_X_tsks_X_act_cancel_CT_tp
You can download the API examples archive from ABI38Confluence-space-export-152334-314.html.zip

Configure workflow on the platform

The Workflow endpoint will apply globally for all cloud tenants and users. By default the feature is disabled on the platform and for each tenant. After you enable workflow on the platform, enable it for each tenant through the UI or the API.

To configure the workflow in the GUI:

  1. Go to Configuration view and under System properties open the General page. 
  2. Enter the Workflow endpoint. The workflow endpoint is the URL of the web application that connects to the workflow tool
    • Abiquo will send tasks for workflow review to this URL
    • If you do not have the URL for the workflow endpoint, check with your system administrator
      • The sample endpoint shown in the following screenshot is for testing only
  3. Click the checkbox to Enable workflow for the platform
  4. Save

If you enable workflow before you enter the workflow endpoint, tasks in progress will fail because of the missing endpoint. Therefore, when configuring workflow, enter the endpoint before you enable the feature.

To configure the workflow integration with the API:

Use the SystemPropertyResource to perform two POST requests, in this order, to set:

  1. client.main.workflowEndPoint  
    • URL for communication with the workflow tool
  2. client.main.workflowEnabled
    • 0 = false; disabled; default value
    • 1 = true; enabled
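For illustration, the two property values could be prepared as follows. This is a hedged sketch: the exact SystemPropertyResource endpoint and media type depend on your Abiquo version, and the endpoint URL here is made up.

```python
# Values for the two system properties that control workflow.
# Send each in a POST to the SystemPropertyResource, endpoint first
# (see the warning above about enabling workflow before the endpoint is set).
WORKFLOW_PROPERTIES = [
    {"name": "client.main.workflowEndPoint",
     "value": "http://workflow.example.com/tasks"},      # illustrative URL
    {"name": "client.main.workflowEnabled", "value": "1"},  # "0" = disabled (default)
]
```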
Configure workflow per cloud tenant

To enable workflow for a tenant in the UI, edit the enterprise and on the General information tab, select Enable workflow.

It is also possible to enable workflow for an enterprise using the API. 

To configure workflow for an individual enterprise, retrieve the enterprise using the API, then modify the workflow attribute and use a PUT request to save your modification.
Get an enterprise

You can get an enterprise by text in the enterprise name. For example, if the enterprise name is "enterprise_bar_enterprise", it can be found by the text "bar".


This request returns a list of enterprises that contain the text string. In this case, there is only one enterprise.

By default the workflow attribute is set to false.

Update the enterprise

Copy the enterprise DTO only and set the workflow attribute to true. Then perform a PUT request to update the enterprise.

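The included request example is unavailable in this export, but the modification itself is a one-field change. A sketch of the transform you would apply to the retrieved enterprise DTO before the PUT (the DTO contents here are illustrative):

```python
def enable_workflow(enterprise_dto: dict) -> dict:
    """Return a copy of the enterprise DTO with workflow switched on.

    The result is what you would send back in the PUT request."""
    updated = dict(enterprise_dto)
    updated["workflow"] = True  # default is False
    return updated

# Illustrative usage with a minimal DTO:
dto = {"name": "enterprise_bar_enterprise", "workflow": False}
to_put = enable_workflow(dto)
```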

Manual workflow operations

If there is a problem, you can always cancel your own tasks. And administrators can manually continue or cancel their users' tasks.

Before you manage tasks with the API, first check how you can Manage Workflow Tasks using the GUI.

Cancel your own queued tasks using the API

To cancel your own tasks in the API:

  1. Use the following request to obtain the pending tasks

    GET /api/admin/enterprises/{idEnterprise}/users/{idUsers}/action/pendingtasks

    Note that "action" is used in this path because the tasks are not performed on the user entity. The tasks are ordered by time in descending order, and you can filter them. Each task contains a cancel link.

  2. Send a request to the cancel link to set the task to CANCELLED. 

    POST http://localhost:9009/api/cloud/virtualdatacenters/1/virtualappliances/1/virtualmachines/235/tasks/94c9cb31-f9bd-4d92-844f-906bd610e9dd/action/cancel

    If you send a message body, Abiquo will store the content in the extradata of the task.

Example: See #Sample workflow cancel request

Additional example: The following query retrieves two tasks.


API example removed: GET_adm_ents_X_users_X_act_pendingtsks_AC_tsks_j
You can download the API examples archive from ABI38Confluence-space-export-152334-314.html.zip

The cancel link for the first task retrieved with the above query is

        {
          "href": "http://localhost:9009/api/cloud/virtualdatacenters/1/virtualappliances/1/virtualmachines/235/tasks/94c9cb31-f9bd-4d92-844f-906bd610e9dd/action/cancel", 
          "rel": "cancel"
        },

Here is an example of a task in XML format.

    <task>
        <link rel="self" href="http://10.60.1.21:80/api/cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4/tasks/9b34d9de-4a7f-407e-9ea5-b7a149f569bb"/>
        <link rel="parent" href="http://10.60.1.21:80/api/cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4"/>
        <link rel="cancel" href="http://10.60.1.21:80/api/cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4/tasks/9b34d9de-4a7f-407e-9ea5-b7a149f569bb/action/cancel"/>
        <link rel="continue" href="http://10.60.1.21:80/api/cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4/tasks/9b34d9de-4a7f-407e-9ea5-b7a149f569bb/action/continue"/>
        <link rel="target" href="http://10.60.1.21:80/api/cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4"/>
        <link rel="user" href="http://10.60.1.21:80/api/admin/enterprises/10/users/3"/>
        <taskId>9b34d9de-4a7f-407e-9ea5-b7a149f569bb</taskId>
        <userId>3</userId>
        <type>DEPLOY</type>
        <ownerId>4</ownerId>
        <state>QUEUEING</state>
        <creationTimestamp>1370448460</creationTimestamp>
        <timestamp>1370448460</timestamp>
        <jobs>
            <job>
                <id>9b34d9de-4a7f-407e-9ea5-b7a149f569bb.bb4731df-88b4-44c5-88e5-fc4cc85e14ce</id>
                <parentTaskId>9b34d9de-4a7f-407e-9ea5-b7a149f569bb</parentTaskId>
                <type>SCHEDULE</type>
                <description>Deploy task's schedule on virtual machine with id 4</description>
                <state>PENDING</state>
                <rollbackState>UNKNOWN</rollbackState>
                <creationTimestamp>1370448460</creationTimestamp>
                <timestamp>1370448460</timestamp>
            </job>
            <job>
                <id>9b34d9de-4a7f-407e-9ea5-b7a149f569bb.48977a87-5a61-41ad-aa9c-8897b732ff3b</id>
                <parentTaskId>9b34d9de-4a7f-407e-9ea5-b7a149f569bb</parentTaskId>
                <type>CONFIGURE</type>
                <description>Deploy task's configure on virtual machine with id 4</description>
                <state>PENDING</state>
                <rollbackState>UNKNOWN</rollbackState>
                <creationTimestamp>1370448460</creationTimestamp>
                <timestamp>1370448460</timestamp>
            </job>
            <job>
                <id>9b34d9de-4a7f-407e-9ea5-b7a149f569bb.08324e31-021c-4d5a-9c08-119382cd8013</id>
                <parentTaskId>9b34d9de-4a7f-407e-9ea5-b7a149f569bb</parentTaskId>
                <type>POWER_ON</type>
                <description>Deploy task's power on on virtual machine with id 4</description>
                <state>PENDING</state>
                <rollbackState>UNKNOWN</rollbackState>
                <creationTimestamp>1370448460</creationTimestamp>
                <timestamp>1370448460</timestamp>
            </job>
        </jobs>
    </task>

As you can see from the above example, the cancel link for an XML task is in the following format.

    <link rel="cancel" href="http://10.60.1.21:80/api/cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4/tasks/9b34d9de-4a7f-407e-9ea5-b7a149f569bb/action/cancel"/>

 

Cancel or continue tasks for other users

If a user has the WORKFLOW_OVERRIDE API privilege, she can manage workflow tasks for the users who belong to the enterprises in her scope. This means she can continue tasks in the QUEUEING state (updating them to PENDING) or cancel them (updating them to CANCELLED).

To manage tasks of your users:

  1. Retrieve all pending tasks, with a GET of /api/admin/enterprises/{idEnterprise}/action/pendingtasks 
    OR
    Filter tasks by user, by retrieving the pending tasks of any of the tenant's users with a GET of /api/admin/enterprises/{idEnterprise}/users/{idUser}/action/pendingtasks
  2. Perform a POST on the continue link (tasks/{uuidTask}/action/continue) or the cancel link (tasks/{uuidTask}/action/cancel). If a message is provided, the content will be stored as extradata with the key workflow.
URLs of Abiquo API workflow methods

The paths to access workflow through the API are:

Path: api/object/task
  (e.g. api/cloud/virtualdatacenters/{idvdc}/virtualappliances/{idvapp}/virtualmachines/{idvm}/tasks)
User: Workflow user
Comments: Base of the link that the workflow connector uses to cancel or continue tasks

Path: api/admin/enterprises/{identerprise}/users/{iduser}/action/pendingtasks
User: User
Comments: Get own tasks (the user can then cancel their own tasks with the above link)

Path: api/admin/enterprises/{identerprise}/action/pendingtasks
User: Tenant admin
Comments: Get all tasks for the enterprise

Technical Reference

This section contains the detailed technical information about the workflow feature.

Workflow architecture

The workflow is integrated at the task level. In the Task data transfer object (DTO), we have added the owner link and the user link. We have also added two new task states: "QUEUEING" and "CANCELLED". When tasks have been started by the workflow integration or overridden by user intervention, they are set to "PENDING".

Pending tasks and Queueing Tasks

The workflow feature has been added to the virtual factory task workflow, which already includes a PENDING state for tasks that are waiting to be processed by the virtual factory. Therefore the task state QUEUEING is used in the API for /pendingtasks. After tasks have been started by the workflow process, they are put in the PENDING state.
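The state handling described above can be sketched as a small transition map. This is a simplification of the description in this section, not the actual state machine, which has further states:

```python
# Workflow-related task states, per the description above:
#   QUEUEING  - held, waiting for a workflow decision
#   PENDING   - released, waiting for the virtual factory
#   CANCELLED - aborted by the workflow tool or by a user/admin
TRANSITIONS = {
    ("QUEUEING", "continue"): "PENDING",
    ("QUEUEING", "cancel"): "CANCELLED",
}

def next_state(state: str, action: str) -> str:
    """Return the next task state, or raise if the action is not allowed."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"cannot {action} a task in state {state}")
```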

Full workflow flow example
Undeploy virtual machine     

Task flow: POWER_OFF (optional) in virtual factory, DECONFIGURE in virtual factory, FREE_RESOURCES in API.

Assume client.main.workflowEnabled is 1 and client.main.workflowEndPoint is set to http://workflow:80.

A user requests an undeploy of a virtual machine. All synchronous checks are performed and pass. Abiquo creates the task and stores it in Redis in the QUEUEING state; it is not queued in RabbitMQ. An HTTP request (a webhook, see http://en.wikipedia.org/wiki/Webhook) is sent to the client.main.workflowEndPoint with the representation of the task (taskDto) and the credentials to authenticate the request that must update the task. The expected result for this request is 204 No Content. If a 4xx or 5xx is returned, the task is updated to CANCELLED.

If client.main.workflowEnabled were 0, the state would be updated to PENDING and the task would be queued.

At this point the undeploy request returns a 202 code and the taskDto. The virtual machine is in the LOCKED state.

At some point the workflow tool performs a POST on tasks/{uuidTask}/action/continue to set the task to PENDING and queue it, or on tasks/{uuidTask}/action/cancel to update the task to CANCELLED and abort its execution. The request must be authenticated with the credentials that are configured in the workflow connector. If a message is provided in the request, it will be added to the extradata key workflow.

Reconfigure virtual machine

Task flow: UPDATE_RESOURCE in API, RECONFIGURE in virtual factory.

Assume client.main.workflowEnabled is 1 and client.main.workflowEndPoint is set to http://workflow:80.

A user requests a reconfigure. All synchronous checks are performed and pass. If the reconfigure does not require a task on remote services, the update is performed in the database and the request ends here with a 200 OK. Otherwise, Abiquo creates the task and stores it in Redis in the QUEUEING state; it is not queued in RabbitMQ. An HTTP request (webhook) is sent to the client.main.workflowEndPoint with the representation of the task (taskDto) and the credentials to authenticate the request that must update the task. The expected result for this request is 204 No Content. If a 4xx or 5xx is returned, the task is updated to CANCELLED.

If client.main.workflowEnabled were 0, the state would be updated to PENDING and the task would be queued.

At this point the reconfigure request is finished and returns a 202. The virtual machine is in the LOCKED state.

At some point the workflow tool performs a POST on tasks/{uuidTask}/action/continue to set the task to PENDING and queue it, or on tasks/{uuidTask}/action/cancel to update the task to CANCELLED and abort its execution. The request must be authenticated with the credentials that are configured in the workflow connector. If a message is provided in the request, it will be added to the extradata key workflow.

Deploy virtual machine

Task flow: SCHEDULE in API, CONFIGURE virtual factory, POWER_ON virtual factory.

Assume client.main.workflowEnabled is 1 and client.main.workflowEndPoint is set to http://workflow:80.

A user requests a deploy of a virtual machine. All synchronous checks are performed and pass. Abiquo creates the task and stores it in Redis in the QUEUEING state; it is not queued in RabbitMQ. An HTTP request (webhook) is sent to the client.main.workflowEndPoint with the representation of the task (taskDto) and the credentials to authenticate the request that must update the task. The expected result for this request is 204 No Content. If a 4xx or 5xx is returned, the task is updated to CANCELLED.

If client.main.workflowEnabled were 0, the state would be updated to PENDING and the task would be queued for allocation.

At this point the deploy request returns a 202 code and the taskDto. The virtual machine is in the LOCKED state.

At some point the workflow tool performs a POST on tasks/{uuidTask}/action/continue to set the task to PENDING and queue it, or on tasks/{uuidTask}/action/cancel to update the task to CANCELLED and abort its execution. The request must be authenticated with the credentials that are configured in the workflow connector. If a message is provided in the request, it will be added to the extradata key workflow.

Undeploy virtual appliance

Flow: Iteration of all of the virtual machines and creation of one task each: POWER_OFF (optional) in virtual factory, DECONFIGURE in virtual factory, FREE_RESOURCES in API.

Assume client.main.workflowEnabled is 1 and client.main.workflowEndPoint is set to http://workflow:80.

A user requests an undeploy of a virtual appliance. All synchronous checks are performed and pass. Abiquo creates a task per virtual machine and stores each one in Redis in the QUEUEING state; they are not queued in RabbitMQ. Each task has an extra data field with a generated workflow batch value. This identifier is stored only in Redis, as part of the extras, and its purpose is to group tasks; it can be used for grouping in the UI. An HTTP request (webhook) is sent to the client.main.workflowEndPoint with the representation of all of the tasks (tasksDto) and credentials per taskDto link to authenticate the requests that must update the tasks. This is the same representation that is returned to the user. The expected result for this request is 204 No Content. If a 4xx or 5xx is returned, the tasks are updated to CANCELLED.

If client.main.workflowEnabled were 0, the state would be updated to PENDING and the tasks would be queued.

At this point the undeploy request returns a 202 code and the taskDto. The virtual machines are in the LOCKED state.

At some point the workflow tool performs a POST on each tasks/{uuidTask}/action/continue to set the task to PENDING and queue it, or on tasks/{uuidTask}/action/cancel to update the task to CANCELLED and abort its execution. If a message is provided in the request, it will be added to the extradata key workflow.

Deploy virtual appliance

Iteration flow: SCHEDULE in API, CONFIGURE virtual factory, POWER_ON virtual factory.

Assume client.main.workflowEnabled is 1 and client.main.workflowEndPoint is set to http://workflow:80.

A user requests a deploy of a virtual appliance. All synchronous checks are performed and pass. Abiquo creates a task per virtual machine and stores each one in Redis in the QUEUEING state; they are not queued in RabbitMQ. Each task has an extra data field with a generated workflow batch value. This identifier is stored only in Redis, as part of the extras, and its purpose is to group tasks; it can be used for grouping in the UI. An HTTP request (webhook) is sent to the client.main.workflowEndPoint with the representation of all of the tasks (tasksDto) and credentials per taskDto link to authenticate the requests that must update the tasks. This is the same representation that is returned to the user. The expected status code is 204 No Content. If a 4xx or 5xx is returned, the tasks are updated to CANCELLED.

If client.main.workflowEnabled were 0, the state would be updated to PENDING and the tasks would be queued.

At this point the deploy request returns a 202 code and the taskDto. The virtual machines are in the LOCKED state.

At some point the workflow tool performs a POST on each tasks/{uuidTask}/action/continue to set the task to PENDING and queue it, or on tasks/{uuidTask}/action/cancel to update the task to CANCELLED and abort its execution. If a message is provided in the request, it will be added to the extradata key workflow.