Green Software Alignment Strategy
This document outlines how HEDGE currently aligns with the green software principles linked below and the actions that can be taken to make the platform even more environmentally friendly based on those principles.
https://principles.green/principles/applied/n-tier/
Optimize your network traffic
Reduce the amount of traffic your architecture creates per operation as well as the distance each request and response travels.
Consider using caching headers, which give browser caches and proxy caches enough information to confidently cache static assets. Caching static assets at the browser or proxy level allows future requests for those assets to be handled by those caches and reduces network traffic to your application.
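As a minimal sketch of this idea (the Node-based server below is hypothetical and not part of HEDGE, which favors static hosting), long-lived caching headers on fingerprinted assets might be set like this:

```typescript
import { createServer } from "node:http";
import { readFile } from "node:fs/promises";

// Serve files under /assets/ with long-lived caching headers so browser and
// proxy caches can answer repeat requests without hitting the origin again.
const server = createServer(async (req, res) => {
  if (req.url?.startsWith("/assets/")) {
    try {
      const body = await readFile(`.${req.url}`);
      res.writeHead(200, {
        // Fingerprinted assets can safely be cached for a year and marked immutable.
        "Cache-Control": "public, max-age=31536000, immutable",
      });
      res.end(body);
    } catch {
      res.writeHead(404);
      res.end();
    }
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(8080);
```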
Network traffic for page configuration is eliminated by compiling it directly into the source code as JSON objects during the build process. Frequently changing data sets, such as posts on a social media site, can either be updated as part of the app assets or served from a separate repository or CDN.
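As a rough sketch of that approach (the file name and config shape below are hypothetical), a page configuration can be imported as a JSON module so it ships inside the compiled bundle and never has to be fetched at runtime:

```typescript
// Page configuration is bundled at build time instead of being fetched at runtime.
// The file name and shape below are hypothetical and only illustrate the idea;
// "resolveJsonModule" must be enabled in tsconfig.json for JSON imports.
import pageConfig from "./config/landing-page.json";

interface PanelConfig {
  name: string;
  settings: Record<string, unknown>;
}

// Because the JSON is part of the compiled bundle, rendering the page
// requires no network round trip for configuration data.
export function getPanels(): PanelConfig[] {
  return (pageConfig.panels ?? []) as PanelConfig[];
}
```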
Consider using a CDN to distribute your application's static assets closer to the source of a request. This distribution of assets reduces the distance all requests for static assets have to travel over the network.
CDN use is built into the architecture of HEDGE at both the data storage and hosting levels. Data is stored in its natural form as flat JSON files on a CDN. Media assets are also uploaded to a CDN or GitHub for storage. Lastly, HEDGE web experiences can be deployed as static sites on a CDN or GitHub.
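A minimal sketch of reading such a flat JSON entity from a CDN edge (the URL and entity shape are placeholders, not actual HEDGE endpoints) could look like:

```typescript
// Hypothetical entity shape; the CDN host and path below are placeholders,
// not the actual HEDGE endpoints.
interface PostEntity {
  id: string;
  title: string;
  body: string;
}

// The request is answered by the nearest CDN edge, shortening the network
// distance compared with a round trip to a central origin server.
export async function loadPost(id: string): Promise<PostEntity> {
  const res = await fetch(`https://cdn.example.com/entities/posts/${id}.json`);
  if (!res.ok) {
    throw new Error(`Failed to load post ${id}: ${res.status}`);
  }
  return (await res.json()) as PostEntity;
}
```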
Where possible, reduce the size of and optimize your bundles and static assets. Consider using compression and decompression for data you transmit over the network. Compression and decompression usually take less overall energy than transmitting uncompressed data over the network.
JSON is stored gzipped in the cloud, and further media optimization will be made available in the future.
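As a hedged sketch of how a gzipped JSON entity might be written with the AWS SDK (the bucket name and key layout are assumptions, not the actual HEDGE storage layout):

```typescript
import { gzipSync } from "node:zlib";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

// Compress the JSON entity before writing it so less data crosses the network
// on every read; CDNs and browsers decompress it transparently via the
// Content-Encoding header. Bucket name and key layout are placeholders.
export async function putGzippedEntity(key: string, entity: unknown): Promise<void> {
  const compressed = gzipSync(Buffer.from(JSON.stringify(entity)));
  await s3.send(
    new PutObjectCommand({
      Bucket: "hedge-entities", // placeholder bucket name
      Key: key,
      Body: compressed,
      ContentType: "application/json",
      ContentEncoding: "gzip", // tells clients the body is gzip-compressed
    })
  );
}
```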
Increase your compute utilization
Update your workload distribution and compute resources so that you use fewer resources at a higher utilization. This reduces the amount of energy your compute resources spend in an idle state, or using energy without doing work.
If using virtual machines for compute resources and they have low utilization, consider reducing the size of those virtual machines to increase utilization. Smaller virtual machines with higher utilization usually use less energy than larger virtual machines with lower utilization given the same workload.
HEDGE does away with all of this overhead. Managing EC2 instances and Kubernetes clusters is a massive waste. Instead, HEDGE promotes cloud purity, with no middlemen in between. This is one of the key methodologies that enables web experiences to be built as lightweight as possible: the focus is placed on the UI and user experience rather than on building middle tiers to manage data workflows.
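For illustration only (the bucket name, key layout, and payload shape are placeholders), a single serverless function that persists an entity straight to object storage, with no always-on application servers in between, might look like:

```typescript
import type { APIGatewayProxyHandlerV2 } from "aws-lambda";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

// A single serverless function persists an entity straight to object storage:
// no always-on application servers sit in between, and compute is only
// consumed while the request is actually running.
export const handler: APIGatewayProxyHandlerV2 = async (event) => {
  const entity = JSON.parse(event.body ?? "{}");
  await s3.send(
    new PutObjectCommand({
      Bucket: "hedge-entities", // placeholder bucket name
      Key: `posts/${entity.id}.json`,
      Body: JSON.stringify(entity),
      ContentType: "application/json",
    })
  );
  return { statusCode: 201, body: JSON.stringify({ id: entity.id }) };
};
```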
Evaluate migrating your workload to a PaaS where possible. Typically, PaaS solutions are sized more appropriately for their workload and can run those workloads at a high utilization on their underlying compute resources.
HEDGE cloud will be a PaaS solution that enables a large ecosystem of independent web experiences to share the same cloud resources. In this way the energy cost of web experiences can be greatly reduced without sacrificing latency, availability, scalability, or responsiveness.
Consider using auto-scaling or burst capabilities for your compute resources over statically allocating compute resources for maximum capacity at all times. These capabilities allow you to increase and decrease your compute resources based on demand while keeping the utilization high on those compute resources.
HEDGE promotes using alternative, cloud-friendly services, ideally ones that auto-scale and/or are serverless. Services like Athena are being considered in place of OpenSearch because of Athena's serverless nature. However, OpenSearch running on EC2 instances is still a very powerful search solution that is difficult to overlook, and it still aligns with the green principles given its ability to scale across multiple independent apps. Search can also be achieved with Amazon OpenSearch Serverless for increased sustainability and resource neutralization.
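As a hypothetical sketch of the serverless direction (the database, table, and output location below are placeholders), a search over flat JSON entities could be started with Athena so compute is only consumed while the query runs:

```typescript
import {
  AthenaClient,
  StartQueryExecutionCommand,
} from "@aws-sdk/client-athena";

const athena = new AthenaClient({ region: "us-east-1" });

// Start a search over flat JSON entities with Athena. The database, table,
// and output location are placeholders; no cluster is kept running between
// queries, so compute is only consumed while the query executes.
export async function searchPosts(term: string): Promise<string | undefined> {
  const result = await athena.send(
    new StartQueryExecutionCommand({
      QueryString: `SELECT id, title FROM posts WHERE title LIKE '%${term}%'`,
      QueryExecutionContext: { Database: "hedge" },
      ResultConfiguration: { OutputLocation: "s3://hedge-athena-results/" },
    })
  );
  return result.QueryExecutionId;
}
```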
If you have many logical layers in a physical tier, consider increasing your physical tiers and reorganizing where your logical layers run. The increased physical tiers with a more granular organization of logical layers allows you more flexibility to scale the logical layers independently. This flexibility allows you to keep utilization high on your compute resources and avoid idle logical layers.
HEDGE promotes responsible use of cloud resources, ideally serverless.
Optimize your database
Optimizing which database you use, as well as how the data is stored, can reduce the energy used to run the database as well as decrease idle time spent waiting for queries to complete.
Ensure you are using the best database for interacting with your data set. For example, if you are running many relational queries on your data set, a relational database is better suited and likely more efficient to use than a NoSQL database.
HEDGE does away with draconian, clunky relational databases and the inefficient queries closely tied to that architecture at scale.
If no single database is designed to handle all the ways you interact with your data set, consider keeping redundant copies of your data in different databases and using each database for the subset of interactions best suited for that database.
HEDGE embraces this philosophy as it relates to leveraging multi-cloud.
Consider using indexes if your database offers them.
No relational databases are currently supported directly unless integrating with a custom REST API. However, HEDGE takes a hard stand against relational databases.
Consider evaluating and optimizing your queries.
Considerations to follow.
Consider using a database cache. In some cases, caching can reduce redundant queries to the database and decrease energy usage by the database, especially for complex or compute-intensive queries.
Considerations to follow.