
4 US Patents Granted

8 US Patents Pending

3 International Publications

Error Remediation in Software as a Service (SaaS) Portals

Examples of error remediation in SaaS portals are disclosed. In an example, activity data in a SaaS portal is monitored, where the SaaS portal is from amongst a plurality of SaaS portals grouped in a portal group. The activity data includes a record of events associated with a service request processed by the SaaS portal, an alert notification on occurrence of an error in the SaaS portal, and user information relating to credentials provided by a user accessing the SaaS portal. A runtime error in the SaaS portal may be detected based on the monitored activity data. The runtime error may be based on a fault in a service or an Application Programming Interface (API) associated with the SaaS portal, a security vulnerability, or non-compliance with a configuration policy governing the SaaS portal. A remediation measure for the runtime error is determined and implemented across the portal group.
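The monitor-detect-remediate loop described above can be sketched as follows. This is an illustrative Python sketch only; the event names, error categories, and remediation actions are assumptions, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class ActivityData:
    events: list     # record of events for a service request
    alerts: list     # alert notifications raised by the portal
    user_info: dict  # credentials/context of the accessing user

# Map each detected error category to a remediation measure (assumed names).
REMEDIATIONS = {
    "api_fault": "restart_api_service",
    "security_vulnerability": "apply_security_patch",
    "policy_noncompliance": "reapply_configuration_policy",
}

def detect_runtime_error(activity):
    """Return the first error category signalled by an alert, if any."""
    for alert in activity.alerts:
        if alert in REMEDIATIONS:
            return alert
    return None

def remediate_portal_group(portal_group):
    """Determine and apply a remediation for each affected portal in the group."""
    applied = {}
    for i, portal in enumerate(portal_group):
        error = detect_runtime_error(portal)
        if error:
            applied[i] = REMEDIATIONS[error]
    return applied
```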

Restoration of Cloud Management Platform

In some examples, a method includes identifying, via the use of a Representational State Transfer (REST) Application Programming Interface (API) call, a modification of persistent data for a Cloud Management Platform (CMP); storing, in a persistent log for the CMP, information about the data modification, including an operation that modified the data, a component of the CMP that modified the data, and the time of modification; determining, as a result of a failure of the CMP, a restoration point for the CMP based on the persistent log; and restoring the CMP to the determined restoration point using an independent restoration system of the component that modified the data.
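The log-then-restore flow above can be sketched as below. The log-entry fields and the "latest consistent entry before the failure" restore rule are assumptions for illustration.

```python
import time

class PersistentLog:
    """Persistent record of data modifications observed via REST API calls."""
    def __init__(self):
        self.entries = []

    def record(self, operation, component, timestamp=None):
        """Store the operation, the CMP component, and the time of modification."""
        self.entries.append({
            "operation": operation,
            "component": component,
            "time": timestamp if timestamp is not None else time.time(),
        })

    def restoration_point(self, failure_time):
        """Pick the last recorded modification before the failure."""
        candidates = [e for e in self.entries if e["time"] <= failure_time]
        return max(candidates, key=lambda e: e["time"]) if candidates else None

def restore(log, failure_time):
    point = log.restoration_point(failure_time)
    if point is None:
        return "no restoration point"
    # Delegate to the restoration system of the component that modified the data.
    return f"restore {point['component']} to state after {point['operation']}"
```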

Creating a highly-available private cloud gateway based on a two-node hyper-converged infrastructure cluster with a self-hosted hypervisor management system

Embodiments described herein are generally directed to a creation of an HA private cloud gateway based on a two-node HCI cluster with a self-hosted HMS. According to an example, a request to register a private cloud to be supported by on-premises infrastructure is received by a SaaS portal, which causes a base station to discover servers within the on-premises infrastructure. The base station is then instructed to prepare a server as a deployment node for use in connection with creation of a cluster of two HCI nodes of the servers to represent the HA private cloud gateway, including installing a seed HMS on the deployment node. The base station is further instructed to cause the seed HMS to create the cluster, install a self-hosted HMS within the cluster to manage the cluster, register the cluster to the self-hosted HMS, and finally delete the seed HMS from the deployment node.
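The creation sequence above, from discovered servers to a self-hosted HMS, can be sketched as an ordered list of steps. The node-selection rule and step wording are illustrative assumptions.

```python
def create_ha_gateway(discovered_servers):
    """Walk the steps from discovery to an HA gateway with a self-hosted HMS."""
    if len(discovered_servers) < 3:
        raise ValueError("need a deployment node plus two HCI nodes")
    deployment_node, *rest = discovered_servers  # assumed: first server is prepared
    hci_nodes = rest[:2]
    steps = [
        f"install seed HMS on {deployment_node}",
        f"seed HMS creates cluster of {hci_nodes}",
        "install self-hosted HMS within cluster",
        "register cluster to self-hosted HMS",
        f"delete seed HMS from {deployment_node}",
    ]
    return steps
```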

Containerised Application Deployment

In some examples, a method includes: (a) reading a manifest file containing information regarding an application running on one or more Virtual Machines (VMs), wherein the information includes application topology, credentials, and configuration details; (b) receiving instructions to re-deploy the application from the one or more VMs to a container environment; (c) discovering, based on information in the manifest file, application consumption attributes including attributes of storage, compute, and network resources consumed by a workload of the application; (d) deploying the application on the container environment to produce a containerized application; (e) copying configuration details from the manifest file to the containerized application; (f) migrating, based on information in the manifest file and the discovered application consumption attributes, stateful data to the containerized application; and (g) validating the containerized application functionality.
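Steps (a) through (g) above can be sketched as a linear pipeline. The manifest schema and the validation check are assumptions for illustration.

```python
import json

def redeploy_to_containers(manifest_text):
    manifest = json.loads(manifest_text)                      # (a) read manifest
    # (b) the re-deploy instruction is implied by invoking this function
    consumption = {                                           # (c) discover attributes
        k: manifest["resources"][k] for k in ("storage", "compute", "network")
    }
    app = {"name": manifest["topology"]["app"],               # (d) deploy as container
           "runtime": "container"}
    app["config"] = manifest["configuration"]                 # (e) copy configuration
    app["data"] = manifest.get("stateful_data", [])           # (f) migrate stateful data
    assert app["config"] and consumption["storage"] >= 0      # (g) validate
    return app, consumption
```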

Unified Container Orchestration Controller

A system to facilitate a container orchestration cloud service platform is described. The system includes a controller to manage life-cycle operations of Kubernetes clusters created by each of a plurality of providers. The controller includes one or more processors to execute a controller microservice to discover a provider plugin associated with each of the plurality of providers, and to perform the cluster life-cycle operations at a container orchestration platform as a broker for each of the plurality of providers.
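The controller-as-broker pattern above can be sketched as follows; the plugin interface and operation names are assumptions, not the actual provider contract.

```python
class ProviderPlugin:
    """Assumed per-provider plugin exposing cluster life-cycle operations."""
    def __init__(self, name):
        self.name = name

    def create_cluster(self, spec):
        return f"{self.name}: created cluster {spec['name']}"

class ControllerMicroservice:
    """Discovers one plugin per provider and brokers operations to it."""
    def __init__(self):
        self.plugins = {}

    def discover(self, provider_name):
        self.plugins[provider_name] = ProviderPlugin(provider_name)

    def perform(self, provider_name, operation, spec):
        plugin = self.plugins[provider_name]   # broker to the provider's plugin
        return getattr(plugin, operation)(spec)
```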

Upgrade of hosts hosting application units of a container-based application based on analysis of the historical workload pattern of the cluster

Example implementations relate to an upgrade of a host that hosts application units of a container-based application. According to an example, monitoring is performed to identify new system software component availability for the cluster. When a new system software component is available, a historical workload pattern of the cluster is analyzed to identify an upgrade window for each host of the cluster. When the upgrade window arrives for a host, it is determined whether reconfiguration of an application is to be performed based on a capacity of the cluster. When the determination is affirmative, a reconfiguration option for the application is identified and a configuration of the application is adjusted accordingly. The host may then be drained, removed from the cluster, upgraded, added back into the cluster, and any application configuration changes can be reversed.
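The window selection and host rotation above can be sketched as below. The lowest-average-load heuristic and the capacity check are assumed stand-ins for the actual workload analysis.

```python
def upgrade_window(hourly_load, window_hours=2):
    """Return the start hour of the lowest-load contiguous window."""
    best_start, best_load = 0, float("inf")
    for start in range(len(hourly_load) - window_hours + 1):
        load = sum(hourly_load[start:start + window_hours])
        if load < best_load:
            best_start, best_load = start, load
    return best_start

def upgrade_host(cluster, host, spare_capacity, needed_capacity):
    """Drain, remove, upgrade and re-add a host; flag reconfiguration if capacity is short."""
    reconfigured = spare_capacity < needed_capacity  # reconfigure app if cluster is tight
    cluster.remove(host)             # drain and remove from cluster
    host = host + "-upgraded"        # apply the new system software component
    cluster.append(host)             # add back into the cluster
    return cluster, reconfigured     # caller reverses any configuration changes
```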

Determining and implementing a feasible resource optimization plan for public cloud consumption

Example implementations relate to determining and implementing a feasible resource optimization plan for public cloud consumption. Telemetry data over a period of time is obtained for a current deployment of virtual infrastructure resources within a current data center of a cloud provider that supports an existing service and an application deployed on the virtual infrastructure resources. Information regarding a set of constraints to be imposed on a resource optimization plan is obtained. Indicators of resource consumption relating to the currently deployed virtual infrastructure resources during the period of time are identified by applying a deep learning algorithm to the telemetry data. A resource optimization plan is determined that is feasible within the set of constraints based on a costing model associated with resources of an alternative data center of the cloud provider, the indicators of resource consumption and costs associated with the current deployment.
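The feasibility search above can be sketched as follows. A simple utilization average stands in for the deep-learning consumption indicators, and the constraint fields, instance names, and costs are illustrative assumptions.

```python
def consumption_indicator(telemetry):
    """Stand-in for the deep-learning step: mean utilization over the period."""
    return sum(telemetry) / len(telemetry)

def optimization_plan(telemetry, current_cost, alt_costs, constraints):
    """Pick the cheapest alternative sizing that satisfies every constraint."""
    utilization = consumption_indicator(telemetry)
    feasible = [
        (name, cost) for name, (cost, capacity) in alt_costs.items()
        if capacity >= utilization * constraints["headroom"]  # enough capacity
        and cost <= constraints["max_cost"]                   # within budget
    ]
    if not feasible:
        return ("keep current deployment", current_cost)
    return min(feasible, key=lambda nc: nc[1])
```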

Prioritizing migration of data associated with a stateful application based on data access patterns

Example implementations relate to migration of a stateful application from a source computing environment to a destination virtualized computing environment by prioritizing migration of data of the application based on a priority map created based on data usage patterns. An instance of the application is installed within the destination environment. The priority map includes priorities for chunks of the data based on historical data access patterns. The data is migrated from a source volume of the source environment to a destination volume of the destination environment on a chunk-by-chunk basis by performing a background data migration process based on the priority map. Usage of the application concurrent with the data migration process is facilitated by abstracting a location of data being operated upon by the application by maintaining migration status for the chunks. The priority map is periodically updated based on observed data access patterns post application migration.
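The priority map and location abstraction above can be sketched as below. Ordering chunks by historical access count is an assumed heuristic, and the volume interface is illustrative.

```python
def build_priority_map(access_counts):
    """Hotter chunks (more historical accesses) migrate first."""
    return sorted(access_counts, key=access_counts.get, reverse=True)

class MigratingVolume:
    """Abstracts chunk location while background migration proceeds."""
    def __init__(self, source, priority_map):
        self.source = source   # chunk id -> data on the source volume
        self.dest = {}         # chunks already migrated to the destination
        self.order = priority_map

    def migrate_next(self):
        """One step of the background, chunk-by-chunk migration process."""
        for chunk in self.order:
            if chunk not in self.dest:
                self.dest[chunk] = self.source[chunk]
                return chunk
        return None

    def read(self, chunk):
        # Application use continues throughout: serve from wherever the chunk is.
        return self.dest.get(chunk, self.source.get(chunk))
```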

Selection of deployment environment for application(s)

Example techniques for the selection of deployment environments for applications are described. The deployment environments may be container-based deployment environments. In an example, the selection may be performed based on the historical behavior of an application.

Execution of functions by cluster of computing nodes

Example techniques for execution of functions by clusters of computing nodes are described. In an example, if a cluster does not have resources available for executing a function for handling a service request, the cluster may request another cluster for executing the function. A result of execution of the function may be received by the cluster and used for handling the service request.
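The offload protocol above can be sketched as follows; the slot-based capacity model and the peer list are illustrative assumptions.

```python
class Cluster:
    def __init__(self, name, free_slots, peers=None):
        self.name = name
        self.free_slots = free_slots
        self.peers = peers or []

    def execute(self, fn, arg):
        self.free_slots -= 1
        return fn(arg)

    def handle_request(self, fn, arg):
        if self.free_slots > 0:
            return self.execute(fn, arg)
        for peer in self.peers:                 # ask another cluster to execute
            if peer.free_slots > 0:
                result = peer.execute(fn, arg)  # peer runs the function
                return result                   # result used to handle the request
        raise RuntimeError("no cluster has capacity")
```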

Proactively protecting service endpoints based on deep learning of user location and access patterns

Example implementations relate to proactively protecting service endpoints based on deep learning of user location and access patterns. A machine-learning model is trained to recognize anomalies in access patterns relating to endpoints of a cloud-based service by capturing metadata associated with user accesses. The metadata for a given access includes information regarding a particular user that initiated the given access, a particular device utilized, a particular location associated with the given access and specific workloads associated with the given access. An anomaly relating to access by a user to a service endpoint is identified by monitoring the access patterns and applying the machine-learning model to metadata associated with the access. Based on a degree of risk to the cloud-based service associated with the identified anomaly, a mitigation action is determined. The cloud-based service is proactively protected by programmatically applying the determined mitigation action.
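The protect-on-anomaly flow above can be sketched as below. A per-user baseline of seen (device, location) pairs stands in for the trained machine-learning model, and the risk scoring and mitigation names are assumptions.

```python
BASELINE = {}  # user -> set of (device, location) pairs captured during training

def train(accesses):
    """Capture access metadata to build the per-user baseline."""
    for a in accesses:
        BASELINE.setdefault(a["user"], set()).add((a["device"], a["location"]))

def anomaly_risk(access):
    """0 = known pattern; higher = more metadata fields deviate from baseline."""
    seen = BASELINE.get(access["user"], set())
    if (access["device"], access["location"]) in seen:
        return 0
    known_devices = {d for d, _ in seen}
    known_locations = {loc for _, loc in seen}
    return ((access["device"] not in known_devices)
            + (access["location"] not in known_locations))

def mitigation(access):
    """Map the degree of risk to a programmatically applied action."""
    return {0: "allow", 1: "require_mfa", 2: "block_endpoint"}[anomaly_risk(access)]
```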

Application Migration

In some examples, a system may include a processing resource and a memory resource. The memory resource may store machine-readable instructions to cause the processing resource to create a migration plan, defining characteristics of a migration of an application from a native computing data center to a computing cloud of a plurality of distinct candidate computing clouds, based on: a first migration constraint for the application determined from an analysis of historical behavior of the application executed on the native computing data center; a second migration constraint for the application determined from an analysis of administrator cloud migration preferences for the application; and a cloud computing characteristic of each of the plurality of distinct candidate computing clouds.
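The plan creation above can be sketched as a constrained selection over candidate clouds. The constraint fields and cloud characteristics are illustrative assumptions.

```python
def create_migration_plan(historical_constraint, admin_constraint, clouds):
    """Pick the candidate cloud satisfying both constraints, cheapest first."""
    candidates = [
        c for c in clouds
        if c["memory_gb"] >= historical_constraint["min_memory_gb"]  # from history
        and c["region"] in admin_constraint["allowed_regions"]       # admin preference
    ]
    if not candidates:
        return None
    chosen = min(candidates, key=lambda c: c["cost"])  # cloud computing characteristic
    return {"target": chosen["name"], "characteristics": chosen}
```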

Resource Directory

In one implementation, a system for providing resource information can comprise a spider engine, a discovery engine, an ontology engine, and a publication engine. The spider engine can crawl a resource provider. The discovery engine can discover a resource of a resource provider. The ontology engine can map the resource to an ontology term. The publication engine can provide the ontology term as part of a resource directory. In another implementation, a method for providing resource information can comprise configuring a crawler with an ontology and an interface associated with a resource provider, crawling a network of storage mechanisms, mapping a resource of the network of storage mechanisms to the ontology, and providing a directory of available resources.
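The four-engine pipeline above (spider, discovery, ontology, publication) can be sketched as follows; the provider record shape and the ontology terms are illustrative assumptions.

```python
ONTOLOGY = {"ssd": "block-storage", "nfs": "file-storage"}  # assumed term mapping

def spider(provider):
    """Spider engine: crawl the provider and yield raw resource records."""
    yield from provider["resources"]

def discover(raw_records):
    """Discovery engine: filter crawl output down to usable resources."""
    return [r for r in raw_records if r.get("available")]

def map_to_ontology(resource):
    """Ontology engine: map a resource to an ontology term."""
    return ONTOLOGY.get(resource["type"], "unclassified")

def publish_directory(provider):
    """Publication engine: run the pipeline and publish the resource directory."""
    return sorted({map_to_ontology(r) for r in discover(spider(provider))})
```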

Scalable Cloud Storage Solution


Hot-spare pool of heterogeneous server(s)

