Evaluating Container Deployment Implementations for Foglets
Abstract
In recent years, the number of devices connected to local networks has rapidly expanded, creating a new internet known as the Internet of Things. The applications running on these devices often require lower-latency solutions than cloud computing can provide in order to perform time-sensitive interactions with other devices near the network’s edge. One solution to this problem is fog computing, a geo-distributed architecture that provides computational resources closer to the edge of the network. This proximity yields low-latency connections among such devices. To implement a powerful fog computing network, applications must be able to deploy and migrate quickly across the geo-distributed resources. In the Foglets project, containers are used to deploy applications efficiently. The Foglets project currently contains two platforms that handle container deployment: one that utilizes system calls, and another that uses the well-established Docker API. In this work, we evaluate the latency and throughput of the two deployment platforms, as well as the impact of container size and container commands on these metrics. We found that when serving many simultaneous deployments through multithreading, the Docker API yields lower latency and higher throughput. We also found that the size of the container and the commands run on it had a negligible impact on deployment latency and throughput.