Predicting End-to-End Response Latency of a Containerized Microservices Workflow on a Cloud Kubernetes Platform

Document Type

Conference Proceeding

Publication Date



Application design has been revolutionized by the adoption of microservices architecture. The ability to estimate end-to-end response latency would help software practitioners design and operate microservices applications reliably and with efficient resource capacity. The objective of this research is to examine and compare data-driven approaches, using a variety of resource metrics, for predicting the end-to-end response latency of a containerized microservices workflow running on a cloud Kubernetes platform. We implemented and evaluated prediction with a deep neural network and several machine learning techniques while investigating the selection of resource utilization metrics. Observed characteristics and performance metrics from both the microservices and platform levels served as prediction indicators. To compare the performance models, we experimented with the open-source Sock Shop containerized benchmark application. The deep neural network achieved the best prediction accuracy when using all metrics, while the other machine learning techniques demonstrated acceptable performance using a subset of the metrics.
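The abstract describes fitting data-driven models that map observed resource metrics to end-to-end response latency. As a minimal sketch of that idea, the snippet below fits an ordinary-least-squares model to synthetic per-service utilization features; the feature names, coefficients, and data are hypothetical illustrations, not the paper's actual metrics or models.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical resource metrics per request window:
# columns = [cpu_utilization, memory_utilization, network_utilization]
X = rng.uniform(0.0, 1.0, size=(n, 3))

# Assume latency grows roughly linearly with utilization plus noise
# (illustrative ground truth, in milliseconds).
true_w = np.array([120.0, 40.0, 80.0])
y = X @ true_w + 15.0 + rng.normal(0.0, 2.0, size=n)

# Fit a linear latency predictor with an intercept via least squares.
A = np.hstack([X, np.ones((n, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Evaluate on held-out synthetic traffic.
X_test = rng.uniform(0.0, 1.0, size=(100, 3))
y_test = X_test @ true_w + 15.0 + rng.normal(0.0, 2.0, size=100)
pred = np.hstack([X_test, np.ones((100, 1))]) @ w
mae = float(np.mean(np.abs(pred - y_test)))
print(f"Mean absolute error: {mae:.2f} ms")
```

The same feature matrix could instead be fed to a deep neural network or other regressors, which is the comparison the study performs across metric subsets.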
