Software CPU energy efficiency in a small cloud with orchestrated environment
Giovanni Franza; Giovanna Sissa
2026-01-01
Abstract
Software applications and services are increasingly widely adopted, and some of them, generative AI for example, are quite energy intensive. This leads to ever-increasing operational greenhouse gas emissions. One approach to reducing operational emissions is to consolidate many applications in large datacenters with more efficient infrastructure and electricity supplied by clean sources. In the context of edge computing, however, this approach is only partially applicable, leaving room and need for software optimization to reduce greenhouse gas emissions in the operational phase. From this perspective, such optimization means reducing electric energy consumption, assuming that the energy mix and its conversion factor are fixed. In a cloud environment, where virtualization lets many applications share the same server, a direct measurement of each application's energy consumption is impossible. The only option is estimation, using a model that links electric energy consumption to computing load parameters. This work focuses on the parameters that can be used to build a model estimating the electric energy consumption of a server running applications in a Kubernetes-orchestrated environment. We built a test setup consisting of a hardware energy meter and a software ICT load generator to explore the measurability and relevance of parameters such as CPU usage percentage, frequency scaling, and CPU capping. The results show that the idle component dominates a server's energy consumption, consistent with the literature, and that instructions executed per second can serve as the main driver of dynamic energy consumption, with acceptable imprecision and good linearity.
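The abstract's conclusion suggests a simple affine power model, P ≈ P_idle + k · IPS, where P_idle is the server's idle power and k scales with instructions executed per second. A minimal sketch of how such a model could be fitted by ordinary least squares is shown below; the sample values are hypothetical placeholders, not measurements from the paper's test setup.

```python
def fit_power_model(ips, power):
    """Least-squares fit of power = p_idle + k * ips.

    ips   : list of instruction rates (e.g. in units of 1e9 instr/s)
    power : list of measured wall power values in watts
    Returns (p_idle, k): estimated idle power and dynamic coefficient.
    """
    n = len(ips)
    mean_x = sum(ips) / n
    mean_y = sum(power) / n
    # Closed-form simple linear regression: slope = cov(x, y) / var(x).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(ips, power))
    var = sum((x - mean_x) ** 2 for x in ips)
    k = cov / var
    p_idle = mean_y - k * mean_x          # intercept = idle power estimate
    return p_idle, k

# Hypothetical samples: instruction rate (1e9 instr/s) vs. wall power (W).
ips_samples = [0.5, 1.0, 2.0, 4.0, 8.0]
watt_samples = [62.0, 64.1, 68.2, 76.0, 92.1]

p_idle, k = fit_power_model(ips_samples, watt_samples)
print(f"idle ~ {p_idle:.1f} W, dynamic ~ {k:.2f} W per 1e9 instr/s")
```

On data like this, the intercept recovers the idle floor (around 60 W here) and the slope gives the marginal energy cost of additional instructions, which is the quantity the abstract proposes as the main driver of dynamic consumption.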



