Abiquo 2.6


Introduction

Workflow tools are used by enterprises to define and coordinate workflows involving third-party components, departments, and so on.

In Abiquo 2.6, we have added webhooks to enable customers to create connectors with workflow tools. This feature could be used for manual configuration requests or approvals, for example.

This feature launches a webhook (http://en.wikipedia.org/wiki/Webhook) before queuing a job in the virtual factory. In summary, a workflow request is sent to the workflow endpoint and Abiquo waits for a response from the workflow tool to continue or cancel the operation. 

Workflow or Outbound API?

If you do not need to suspend tasks in Abiquo until responses are received from the workflow tool, Abiquo recommends that you use the Outbound API.

Scope

Abiquo workflow is used with the following entities and jobs:

  • virtual appliance
    • deploy 
    • undeploy
  • virtual machine
    • deploy
    • undeploy
    • reconfigure
Diagram of deploy process with workflow

This diagram shows a simplified view of how an Abiquo deploy functions with workflow.

Integrating with the Workflow Tool

The Abiquo workflow integration sends a workflow request to the workflow endpoint. This is a POST with a list of tasks for workflow review. Abiquo then waits for the workflow connector to respond to the Abiquo API: the workflow connector should send a POST to the continue or cancel link contained in each task, and the workflow tool can also send a message about the reviewed task.

Workflow user

The Abiquo API is secured and every request must be performed with the appropriate credentials. Therefore the administrator should create a workflow user for the workflow connector to access Abiquo. Ideally this user will have only read permission for the data the workflow tool needs to access, plus the WORKFLOW_OVERRIDE privilege.

Architecture

The workflow is integrated at the task level. In the Task data transfer object (DTO), we have added the owner link and the user link. We have also added two new task states: "QUEUEING" and "CANCELLED". When tasks have been started by the workflow integration or overridden by user intervention, they are set to "PENDING".

Pending tasks and Queueing Tasks

The workflow feature has been added to the virtual factory task workflow, which already includes a PENDING state for tasks that are waiting to be processed by the virtual factory. Therefore the API uses the task state QUEUEING for the tasks returned by /pendingtasks. After tasks have been started by the workflow process, they are put in the PENDING state.

Workflow Paths

The paths to access workflow through the API are:

  • api/object/task
    e.g. api/cloud/virtualdatacenters/{idvdc}/virtualappliances/{idvapp}/virtualmachines/{idvm}/tasks
    Workflow tool cancel or continue links
  • api/admin/enterprises/{identerprise}/users/{iduser}/action/pendingtasks
    Link for the user to get their own tasks (the user can then cancel their own tasks, as above)
  • api/admin/enterprises/{identerprise}/action/pendingtasks
    The user can get all tasks for the enterprise

All links sent to the workflow endpoint contain only the path segment of the URI, because Abiquo might be behind a load balancer or another mechanism. This is not a problem because the workflow connector must already know how to access Abiquo. As an example, consider an API on a host with IP 10.0.0.4 that is exposed publicly as company.com/abiquo/api. If Abiquo sent the full URI, the links would be of the form http://10.0.0.4/api/object/task. However, the API might not be accessible at that IP (due to iptables, firewalls, etc.) but at http://company.com/abiquo/api/object/task. When producing the links, Abiquo does not know the public base URL (e.g. company.com/abiquo/api), so it sends the link with the href object/task.
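
Because the hrefs are relative, the workflow connector must prepend the Abiquo API base URL that it uses to reach Abiquo before posting back. A minimal sketch, assuming the base URL and the workflow user's credentials are configured on the connector side (all values shown are placeholders):

# Abiquo API base URL as the connector reaches it (configured by the operator)
ABIQUO_API="http://company.com/abiquo/api"
# Relative href taken from the "continue" link in the webhook payload
CONTINUE_HREF="cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4/tasks/9b34d9de-4a7f-407e-9ea5-b7a149f569bb/action/continue"

# POST to the resolved link with the workflow user's credentials and an optional plain-text message
curl -X POST "${ABIQUO_API}/${CONTINUE_HREF}" \
     -u workflow_user:password \
     -H 'Content-Type: text/plain' \
     -d 'Approved'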

Request to Workflow Tool Connector

This is a sample request sent to the workflow tool by Abiquo. It shows the Task DTO of a deploy task with three jobs: schedule, configure and power on.

POST /11s9yz51 HTTP/1.1
User-Agent: Java/1.7.0_21
Pragma: no-cache
Host: requestb.in
Content-Type: application/vnd.abiquo.tasks+xml;version=2.6
Content-Length: 2153
Connection: close
Cache-Control: no-cache
Accept: text/plain

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<tasks>
   <task>
      <link rel="self" href="cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4/tasks/9b34d9de-4a7f-407e-9ea5-b7a149f569bb"/>
      <link rel="parent" href="cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4/tasks"/>
      <link rel="target" href="cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4"/>
      <link rel="user" href="admin/enterprises/10/users/3"/>
      <link rel="cancel" href="cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4/tasks/9b34d9de-4a7f-407e-9ea5-b7a149f569bb/action/cancel"/>
      <link rel="continue" href="cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4/tasks/9b34d9de-4a7f-407e-9ea5-b7a149f569bb/action/continue"/>
      <taskId>9b34d9de-4a7f-407e-9ea5-b7a149f569bb</taskId>
      <userId>3</userId>
      <type>DEPLOY</type>
      <ownerId>4</ownerId>
      <state>QUEUEING</state>
      <creationTimestamp>1370448460</creationTimestamp>
      <timestamp>0</timestamp>
      <jobs>
         <job>
             <id>9b34d9de-4a7f-407e-9ea5-b7a149f569bb.bb4731df-88b4-44c5-88e5-fc4cc85e14ce</id>
             <parentTaskId>9b34d9de-4a7f-407e-9ea5-b7a149f569bb</parentTaskId>
             <type>SCHEDULE</type>
             <description>Deploy task's schedule on virtual machine with id 4</description>
             <state>PENDING</state>
             <rollbackState>UNKNOWN</rollbackState>
             <creationTimestamp>1370448460</creationTimestamp>
             <timestamp>0</timestamp>
         </job>
         <job>
             <id>9b34d9de-4a7f-407e-9ea5-b7a149f569bb.48977a87-5a61-41ad-aa9c-8897b732ff3b</id>
             <parentTaskId>9b34d9de-4a7f-407e-9ea5-b7a149f569bb</parentTaskId>
             <type>CONFIGURE</type>
             <description>Deploy task's configure on virtual machine with id 4</description>
             <state>PENDING</state>
             <rollbackState>UNKNOWN</rollbackState>
             <creationTimestamp>1370448460</creationTimestamp>
             <timestamp>0</timestamp>
         </job>
         <job>
             <id>9b34d9de-4a7f-407e-9ea5-b7a149f569bb.08324e31-021c-4d5a-9c08-119382cd8013</id>
             <parentTaskId>9b34d9de-4a7f-407e-9ea5-b7a149f569bb</parentTaskId>
             <type>POWER_ON</type>
             <description>Deploy task's power on on virtual machine with id 4</description>
             <state>PENDING</state>
             <rollbackState>UNKNOWN</rollbackState>
             <creationTimestamp>1370448460</creationTimestamp>
             <timestamp>0</timestamp>
         </job>
     </jobs>
   </task>
</tasks>
Workflow Connector Response

The workflow connector should send a POST request to one of the links provided in the Task DTO to either cancel the task or start it (the continue link). The following code block shows these links.

      <link rel="cancel" href="cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4/tasks/9b34d9de-4a7f-407e-9ea5-b7a149f569bb/action/cancel"/>
      <link rel="continue" href="cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4/tasks/9b34d9de-4a7f-407e-9ea5-b7a149f569bb/action/continue"/>
Message

The workflow connector can send a plain text message in the body of the POST request. The content of the message will be stored in the task extradata with the key workflow.

Building the Request

Here is an example request that the workflow tool would send to Abiquo to continue the task. Note the message text is "Approved".

curl -X POST http://10.60.1.21/api/cloud/virtualdatacenters/1/virtualappliances/3/virtualmachines/3/tasks/a0856853-aa9a-4e93-a24f-3f5d5f87a4c5/action/continue -H 'Content-Type: text/plain' --verbose -u admin:xabiquo -d 'Approved' | xmlindent -nbe -f

And here is the same request with verbose output, showing the 204 No Content response for success.

curl -X POST http://10.60.1.21/api/cloud/virtualdatacenters/1/virtualappliances/3/virtualmachines/3/tasks/a0856853-aa9a-4e93-a24f-3f5d5f87a4c5/action/continue -H 'Content-Type: text/plain' --verbose -u admin:xabiquo -d 'Approved' | xmlindent -nbe -f
* About to connect() to 10.60.1.21 port 80 (#0)
*   Trying 10.60.1.21...   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0connected
* Server auth using Basic with user 'admin'
> POST /api/cloud/virtualdatacenters/1/virtualappliances/3/virtualmachines/3/tasks/a0856853-aa9a-4e93-a24f-3f5d5f87a4c5/action/continue HTTP/1.1
> Authorization: Basic YWRtaW46eGFiaXF1bw==
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: 10.60.1.21
> Accept: */*
> Content-Type: text/plain
> Content-Length: 8
> 
} [data not shown]
* upload completely sent off: 8 out of 8 bytes
< HTTP/1.1 204 No Content
< Server: Apache-Coyote/1.1
< Set-Cookie: auth=YWRtaW46MTM3MDUxODY5NzQwNTowOWNhMzNhYTU5YjkxNDYyZGUwYWQ4M2YyNzBjZjQ1ZTpBQklRVU8; Expires=Thu, 06-Jun-2013 11:38:17 GMT; Path=/api
< Set-Cookie: JSESSIONID=9827BB197231E8E873ABA1B63635FE05; Path=/api
< Date: Thu, 06 Jun 2013 11:08:17 GMT
< 
100     8    0     0  100     8      0     35 --:--:-- --:--:-- --:--:--    35
* Connection #0 to host 10.60.1.21 left intact
* Closing connection #0
Configuring Workflow
Configure Workflow Using the GUI

In the GUI, workflow is enabled and the workflow endpoint URL is set in the Configuration view.

Configure Workflow Using the API

As seen in the above GUI configuration, two new SystemProperty elements have been added to Abiquo for workflow configuration.

The privileges to "Access Configuration view" and "Modify configuration data" are required to access the SystemPropertyResource.

To configure workflow integration, perform two POST requests, in this order, to set the following properties (a rough sketch follows the list): 

  1. client.main.workflowEndPoint  
    • URL for communication with the workflow tool
  2. client.main.workflowEnabled
    • 0 = false; disabled; default value
    • 1 = true; enabled
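
As a rough sketch only, the two properties might be set with requests like the following. The resource path (api/config/properties), the XML element names, and the media type are assumptions here and should be checked against the SystemPropertyResource reference for your Abiquo version; the host and credentials are the placeholders used in the other examples on this page.

# 1. Set the workflow endpoint URL (assumed path and payload; verify against the API reference)
curl -X POST http://10.60.1.21/api/config/properties \
     -u admin:xabiquo \
     -H 'Content-Type: application/vnd.abiquo.systemproperty+xml;version=2.6' \
     -d '<systemProperty><name>client.main.workflowEndPoint</name><value>http://workflow:80</value></systemProperty>'

# 2. Enable workflow (1 = enabled, 0 = disabled)
curl -X POST http://10.60.1.21/api/config/properties \
     -u admin:xabiquo \
     -H 'Content-Type: application/vnd.abiquo.systemproperty+xml;version=2.6' \
     -d '<systemProperty><name>client.main.workflowEnabled</name><value>1</value></systemProperty>'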
Cancel Your Queued Tasks Using the API

Before managing workflow tasks in the API, we recommend that you already know how to Manage Workflow Tasks using the GUI.

The path for the user to cancel her own tasks is /api/admin/enterprises/{idEnterprise}/users/{idUsers}/action/pendingtasks.

Note that "action" is used in this path because the tasks are not performed on the user entity.

A GET request will return the user's queued tasks, ordered by time in descending order, and the results can be filtered.

GET /api/admin/enterprises/{identerprise}/users/{iduser}/action/pendingtasks returns a list of tasks.
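
For example, using the enterprise and user IDs from the sample task below (enterprise 10, user 3) and the placeholder host and credentials from the earlier examples, the request might look like this; the Accept media type is the tasks media type used in the webhook request above:

curl -X GET http://10.60.1.21/api/admin/enterprises/10/users/3/action/pendingtasks \
     -u admin:xabiquo \
     -H 'Accept: application/vnd.abiquo.tasks+xml;version=2.6'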

In each task there is a cancel link. A POST to that link will set the task to CANCELLED. If a message is provided, the content will be stored in the task extradata with the key workflow.

The following Task DTO was retrieved using the method described above; note the cancel link.

    <task>
        <link rel="self" href="http://10.60.1.21:80/api/cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4/tasks/9b34d9de-4a7f-407e-9ea5-b7a149f569bb"/>
        <link rel="parent" href="http://10.60.1.21:80/api/cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4"/>
        <link rel="cancel" href="http://10.60.1.21:80/api/cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4/tasks/9b34d9de-4a7f-407e-9ea5-b7a149f569bb/action/cancel"/>
        <link rel="continue" href="http://10.60.1.21:80/api/cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4/tasks/9b34d9de-4a7f-407e-9ea5-b7a149f569bb/action/continue"/>
        <link rel="target" href="http://10.60.1.21:80/api/cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4"/>
        <link rel="user" href="http://10.60.1.21:80/api/admin/enterprises/10/users/3"/>
        <taskId>9b34d9de-4a7f-407e-9ea5-b7a149f569bb</taskId>
        <userId>3</userId>
        <type>DEPLOY</type>
        <ownerId>4</ownerId>
        <state>QUEUEING</state>
        <creationTimestamp>1370448460</creationTimestamp>
        <timestamp>1370448460</timestamp>
        <jobs>
            <job>
                <id>9b34d9de-4a7f-407e-9ea5-b7a149f569bb.bb4731df-88b4-44c5-88e5-fc4cc85e14ce</id>
                <parentTaskId>9b34d9de-4a7f-407e-9ea5-b7a149f569bb</parentTaskId>
                <type>SCHEDULE</type>
                <description>Deploy task's schedule on virtual machine with id 4</description>
                <state>PENDING</state>
                <rollbackState>UNKNOWN</rollbackState>
                <creationTimestamp>1370448460</creationTimestamp>
                <timestamp>1370448460</timestamp>
            </job>
            <job>
                <id>9b34d9de-4a7f-407e-9ea5-b7a149f569bb.48977a87-5a61-41ad-aa9c-8897b732ff3b</id>
                <parentTaskId>9b34d9de-4a7f-407e-9ea5-b7a149f569bb</parentTaskId>
                <type>CONFIGURE</type>
                <description>Deploy task's configure on virtual machine with id 4</description>
                <state>PENDING</state>
                <rollbackState>UNKNOWN</rollbackState>
                <creationTimestamp>1370448460</creationTimestamp>
                <timestamp>1370448460</timestamp>
            </job>
            <job>
                <id>9b34d9de-4a7f-407e-9ea5-b7a149f569bb.08324e31-021c-4d5a-9c08-119382cd8013</id>
                <parentTaskId>9b34d9de-4a7f-407e-9ea5-b7a149f569bb</parentTaskId>
                <type>POWER_ON</type>
                <description>Deploy task's power on on virtual machine with id 4</description>
                <state>PENDING</state>
                <rollbackState>UNKNOWN</rollbackState>
                <creationTimestamp>1370448460</creationTimestamp>
                <timestamp>1370448460</timestamp>
            </job>
        </jobs>
    </task>

The cancel link for the above task is

    <link rel="cancel" href="http://10.60.1.21:80/api/cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4/tasks/9b34d9de-4a7f-407e-9ea5-b7a149f569bb/action/cancel"/>
Cancel Tasks for Other Users

If a user has the WORKFLOW_OVERRIDE API privilege, she can manage workflow tasks for the users who belong to the enterprises in her scope. This means she can start (continue, update to PENDING state) or cancel (update to CANCELLED) tasks in the QUEUEING state.

To do so, she must perform a GET of /api/admin/enterprises/{identerprise}/action/pendingtasks to retrieve the list of tasks.

Then the user must perform a POST on the continue link (tasks/{uuidTask}/action/continue) or the cancel link (tasks/{uuidTask}/action/cancel). If a message is provided, the content will be stored as extradata with the key workflow.

The user can also filter tasks by user by retrieving pending tasks for any of the enterprise users, with a GET of /api/admin/enterprises/{idEnterprise}/users/{idUsers}/action/pendingtasks.
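
For example, an administrator with the WORKFLOW_OVERRIDE privilege might list the queued tasks for enterprise 10 and then continue one of them as follows (host, credentials and message are placeholders; the continue link comes from the returned task):

# List all queued tasks for the enterprise
curl -X GET http://10.60.1.21/api/admin/enterprises/10/action/pendingtasks \
     -u admin:xabiquo \
     -H 'Accept: application/vnd.abiquo.tasks+xml;version=2.6'

# Continue one of the returned tasks using its continue link
curl -X POST http://10.60.1.21/api/cloud/virtualdatacenters/1/virtualappliances/4/virtualmachines/4/tasks/9b34d9de-4a7f-407e-9ea5-b7a149f569bb/action/continue \
     -u admin:xabiquo \
     -H 'Content-Type: text/plain' \
     -d 'Approved by administrator'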

Full workflow examples
Undeploy virtual machine

Task flow: POWER_OFF (optional) in virtual factory, DECONFIGURE in virtual factory, FREE_RESOURCES in API.

In this example, client.main.workflowEnabled is 1 and client.main.workflowEndPoint is set to http://workflow:80.

A user requests an undeploy of a virtual machine. All synchronous checks are performed and pass. Abiquo creates the task and stores it in Redis in the QUEUEING state; it is not queued in RabbitMQ. Abiquo then sends an HTTP request (a webhook, http://en.wikipedia.org/wiki/Webhook) to the client.main.workflowEndPoint with the representation of the task (TaskDto) and the credentials to authenticate the request that must update the task. The expected result of this request is a 204 No Content. If a 4xx or 5xx is returned, the task is updated to CANCELLED.

If client.main.workflowEnabled were 0, the state would be updated to PENDING and the task would be queued.

At this point the undeploy request returns with a 202 code and the TaskDto. The virtual machine is in the LOCKED state.

At some point the workflow tool performs a POST on tasks/{uuidTask}/action/continue to set the task to PENDING and queue it, or on tasks/{uuidTask}/action/cancel to update the task to CANCELLED and abort its execution. This request must be authenticated with the credentials configured for the workflow connector. If a message is provided in the request, it will be added to the extradata key workflow.

Reconfigure virtual machine

Task flow: UPDATE_RESOURCE in API, RECONFIGURE in virtual factory.

In this example, client.main.workflowEnabled is 1 and client.main.workflowEndPoint is set to http://workflow:80.

A user requests a reconfigure. All synchronous checks are performed and pass. If the reconfigure does not involve a task on the remote services, the update is made in the database and the request ends here with a 200 OK. Otherwise, Abiquo creates the task and stores it in Redis in the QUEUEING state; it is not queued in RabbitMQ. Abiquo then sends an HTTP webhook request to the client.main.workflowEndPoint with the representation of the task (TaskDto) and the credentials to authenticate the request that must update the task. The expected result of this request is a 204 No Content. If a 4xx or 5xx is returned, the task is updated to CANCELLED.

If client.main.workflowEnabled were 0, the state would be updated to PENDING and the task would be queued.

At this point the reconfigure request is finished and returns a 202. The virtual machine is in the LOCKED state.

At some point the workflow tool performs a POST on tasks/{uuidTask}/action/continue to set the task to PENDING and queue it, or on tasks/{uuidTask}/action/cancel to update the task to CANCELLED and abort its execution. This request must be authenticated with the credentials configured for the workflow connector. If a message is provided in the request, it will be added to the extradata key workflow.

Deploy virtual machine

Task flow: SCHEDULE in API, CONFIGURE in virtual factory, POWER_ON in virtual factory.

In this example, client.main.workflowEnabled is 1 and client.main.workflowEndPoint is set to http://workflow:80.

A user requests a deploy of a virtual machine. All synchronous checks are performed and pass. Abiquo creates the task and stores it in Redis in the QUEUEING state; it is not queued in RabbitMQ. Abiquo then sends an HTTP webhook request to the client.main.workflowEndPoint with the representation of the task (TaskDto) and the credentials to authenticate the request that must update the task. The expected result of this request is a 204 No Content. If a 4xx or 5xx is returned, the task is updated to CANCELLED.

If client.main.workflowEnabled were 0, the state would be updated to PENDING and the task would be queued for allocation.

At this point the deploy request returns with a 202 code and the TaskDto. The virtual machine is in the LOCKED state.

At some point the workflow tool performs a POST on tasks/{uuidTask}/action/continue to set the task to PENDING and queue it, or on tasks/{uuidTask}/action/cancel to update the task to CANCELLED and abort its execution. This request must be authenticated with the credentials configured for the workflow connector. If a message is provided in the request, it will be added to the extradata key workflow.

Undeploy virtual appliance

Flow: iterate over all of the virtual machines, creating one task for each: POWER_OFF (optional) in virtual factory, DECONFIGURE in virtual factory, FREE_RESOURCES in API.

In this example, client.main.workflowEnabled is 1 and client.main.workflowEndPoint is set to http://workflow:80.

A user requests an undeploy of a virtual appliance. All synchronous checks are performed and pass. Abiquo creates one task per virtual machine and stores each of them in Redis in the QUEUEING state; they are not queued in RabbitMQ. Each task has an extra data field with a generated workflow batch identifier. This identifier is only stored in Redis as part of the extras and its purpose is to group tasks, for example in the UI. Abiquo then sends an HTTP webhook request to the client.main.workflowEndPoint with the representation of all of the task links (TasksDto), with credentials per TaskDto link to authenticate the requests that must update the tasks. This is the same representation that is returned to the user. The expected result of this request is a 204 No Content. If a 4xx or 5xx is returned, the tasks are updated to CANCELLED.

If client.main.workflowEnabled were 0, the state would be updated to PENDING and the tasks would be queued.

At this point the undeploy request returns with a 202 code and the task representations. The virtual machines are in the LOCKED state.

At some point the workflow tool performs a POST on each tasks/{uuidTask}/action/continue to set the task to PENDING and queue it, or on tasks/{uuidTask}/action/cancel to update the task to CANCELLED and abort its execution. If a message is provided in the request, it will be added to the extradata key workflow.

Deploy virtual appliance

Iteration flow: SCHEDULE in API, CONFIGURE in virtual factory, POWER_ON in virtual factory.

In this example, client.main.workflowEnabled is 1 and client.main.workflowEndPoint is set to http://workflow:80.

A user requests a deploy of a virtual appliance. All synchronous checks are performed and pass. Abiquo creates one task per virtual machine and stores each of them in Redis in the QUEUEING state; they are not queued in RabbitMQ. Each task has an extra data field with a generated workflow batch identifier. This identifier is only stored in Redis as part of the extras and its purpose is to group tasks, for example in the UI. Abiquo then sends an HTTP webhook request to the client.main.workflowEndPoint with the representation of all of the tasks (TasksDto), with credentials per TaskDto link to authenticate the requests that must update the tasks. This is the same representation that is returned to the user. The expected result of this request is a 204 No Content. If a 4xx or 5xx is returned, the tasks are updated to CANCELLED.

If client.main.workflowEnabled were 0, the state would be updated to PENDING and the tasks would be queued.

At this point the deploy request returns with a 202 code and the task representations. The virtual machines are in the LOCKED state.

At some point the workflow tool performs a POST on each tasks/{uuidTask}/action/continue to set the task to PENDING and queue it, or on tasks/{uuidTask}/action/cancel to update the task to CANCELLED and abort its execution. If a message is provided in the request, it will be added to the extradata key workflow.