
wip

pull/1/head
rra committed 67b688c43e, 6 years ago

Changed files:
1. raw/nas.md (+1 line)
2. raw/solar.lowtech.md (+114 lines)

raw/nas.md (+1 line)

@@ -4,6 +4,7 @@ Category: storage
Tags: NAS, diy, vpn
Slug: network-backups-over-vpn
Description: How to set up a spare olimex board as a networked backup disk
Status: draft

# Introduction

raw/solar.lowtech.md (+114 lines)

@@ -1,14 +1,13 @@
Title: A Low-Tech Solar Powered Server
Date: 2018-09-08
Category: solar server
Tags: solar power, static sites, energy optimization
Slug: solar-powered-server
Description: Optimizing a website and server hardware for low energy and solar power.
Author: Roel Roscam Abbing
Status: draft
# Introduction
who, what, why; reference to lowtechmagazine.com
[TOC]
Earlier this year we were asked to help redesign the website of <lowtechmagazine.com>. The primary goal was to radically reduce the energy use associated with accessing their content and to stay true to the idea of low-tech.
@@ -18,10 +17,11 @@ In this particular case it means that all the optimizations and increases in mat
Concretely this meant making a website and server which could be hosted from the author's off-grid solar system. <https://solar.lowtechmagazine.com/about/> gives more insight into the motivations behind making a self-hosted solar-powered server; this companion article on <homebrewserver.club> will show you how to set up the server.
A low-tech website is one:

- that is minimal in size and in its requirements
- that supports older computers and slower networks
- that improves the portability and archivability of the content
## Software
@@ -31,18 +31,18 @@ The main change in the webdesign was to move from a dynamic website based typepa
![Image from the blog showing 19th century telephone switchboard operators, 159.5KB](/images/international-switchboard.jpg)Image from the blog showing 19th century telephone switchboard operators, 159.5KB
One of the main challenges was to reduce the overall size of the website, in particular to bring the size of each page below 1 MB. Since a large part of both the appeal and the weight of the magazine comes from the fact that it is richly illustrated, this presented us with a particular challenge.
### Image compression
In order to reduce the size of the images, without diminishing their role in the design and the blog itself, we turned to a technique called dithering:
![The same image but dithered with a 3 color palette](/images/international-switchboard3.png)The same image but dithered with a 3 color palette, 36.5KB
This is a technique 'to create the illusion of "color depth" in images with a limited color palette'[^illusion]. It is based on the print reproduction technique called [halftoning](https://en.wikipedia.org/wiki/Halftone). Dithering, or digital half-toning[^digitalhalftone], was widely used in videogames and pixel art at a time when a limited amount of video memory constrained the available colors. In essence, dithering relies on optical illusions to simulate more colors. These optical illusions are, however, broken by the distinct and visible patterns that the dithering algorithms generate.
![Dithered with a six tone palette](/images/international-switchboard6.png)Dithered with a six tone palette, 76KB
As a consequence, most of the effort and literature on dithering is about limiting the 'banding' or visual artifacts by employing increasingly complex dithering algorithms[^dithering]. Our design instead celebrates the visible patterns introduced by the technique. Coincidentally, the Bayer ordered dithering algorithm that we use not only introduces these distinct visible patterns, but is also quite simple and fast.
![Dithered with an eleven tone color palette](/images/international-switchboard11.png)Dithered with an eleven tone palette, 110KB
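The idea behind ordered dithering can be sketched in a few lines of Python using the classic 4x4 Bayer threshold matrix. This is a hypothetical helper operating on raw pixel values, not the site's actual image pipeline, which works on real image files:

```python
# 4x4 Bayer threshold matrix used in ordered dithering (values 0..15)
BAYER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def ordered_dither(gray, levels=2):
    """Quantize a grayscale image (rows of 0-255 values) to `levels` tones
    using Bayer ordered dithering."""
    step = 255 / (levels - 1)
    out = []
    for y, row in enumerate(gray):
        new_row = []
        for x, value in enumerate(row):
            # Per-pixel threshold offset in the range -step/2 .. +step/2
            offset = (BAYER4[y % 4][x % 4] / 16 - 0.5) * step
            shifted = min(255, max(0, value + offset))
            # Snap to the nearest of the evenly spaced output tones
            new_row.append(int(round(shifted / step) * step))
        out.append(new_row)
    return out

# A flat mid-grey area becomes a regular mix of pure black and white pixels
flat_grey = [[128] * 8 for _ in range(8)]
dithered = ordered_dither(flat_grey, levels=2)
```

Because the threshold pattern is a fixed matrix, the same input always produces the same visible texture, which is exactly the aesthetic the design embraces.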
@@ -67,24 +67,21 @@ As a webserver we use [NGINX](https://www.nginx.com/) to serve our static files.
To test some of our assumptions we did measurements using a few different articles. We used the following pages:
`FP` = [Front page](https://solar.lowtechmagazine.com), 404.68KB, 9 images
`WE` = [How To Run The Economy On The Weather](https://solar.lowtechmagazine.com/2017/09/how-to-run-the-economy-on-the-weather/), 1.31 MB, 21 images
`HS` = [Heat Storage Hypocausts](https://solar.lowtechmagazine.com/2017/03/heat-storage-hypocausts-air-heating-middle-ages/), 748.98KB, 11 images
`FW` = [Fruit Walls: Urban Farming in the 1600s](https://solar.lowtechmagazine.com/2015/12/fruit-walls-urban-farming/), 1.61MB, 19 images
`CW` = [How To Downsize A Transport Network: Chinese Wheelbarrows](https://solar.lowtechmagazine.com/2011/12/the-chinese-wheelbarrow/), 996.8KB, 23 images
For this test the pages, which are hosted in Barcelona, have been loaded from a machine in the Netherlands. Times are averages of 3 measurements.
### Compression of transmitted data

We run gzip compression on all our text-based content, which lowers the size of transmitted information at the cost of a slight increase in required processing. By now this is common practice in most web servers, but we enable it explicitly. Reducing the amount of data transferred also reduces the total environmental footprint.
:::console
# Compression
@@ -95,7 +92,8 @@ We run gzip compression on all our text based content, which lowers the size of
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
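The trade-off described above, smaller transfers at the cost of a little processing, can be sketched with Python's standard gzip module. The payload and compression level here are illustrative, not measurements from the actual server:

```python
import gzip

# A repetitive HTML-like payload stands in for a typical article page
page = b"<p>A low-tech website is one that is minimal in size.</p>\n" * 500

# compresslevel=5 is a hypothetical middle ground between speed and size
compressed = gzip.compress(page, compresslevel=5)
print(f"{len(page)} bytes -> {len(compressed)} bytes")
```

Text compresses very well because markup is highly repetitive; already-compressed assets such as JPEG or PNG images gain almost nothing, which is why `gzip_types` above lists only text formats.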
A comparison of the amount of data sent with gzip compression enabled or disabled:
|GZIP | FP | WE | HS | FW | CW |
|----------|----------|----------|----------|----------|----------|
@@ -107,9 +105,9 @@ We run gzip compression on all our text based content, which lowers the size of
### Caching of static resources
Caching is a technique in which some of the site's resources, such as style sheets and images, are provided with additional headers that tell the visitor's browser to save a local copy of those files. This ensures that the next time the visitor loads the same page, the files are loaded from the local cache rather than being transmitted over the network again. This not only reduces the time to load the entire page, but also lowers resource usage both on the network and on our server.
The common practice is to cache everything except the HTML, so that when the user loads the web page again the HTML will notify the browser of all the changes. However, since <lowtechmagazine.com> publishes only 12 articles per year, we decided to also cache the HTML. The cache is set to 7 days, meaning that only after a week will the user's browser automatically check for new content. Only for the front page is caching disabled.
:::console
map $sent_http_content_type $expires {
@@ -120,10 +118,9 @@ The common practice is to cache everything except the HTML, so that when the use
~image/ max;
}
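The browser-side effect of such an expiry policy can be sketched as a freshness check. This is a hypothetical helper with made-up timestamps, not code that runs on the server:

```python
from datetime import datetime, timedelta

HTML_MAX_AGE = timedelta(days=7)  # the cache period we chose for the HTML

def is_fresh(fetched_at, now, max_age=HTML_MAX_AGE):
    """True if a cached copy may be reused without touching the network."""
    return now - fetched_at < max_age

fetched = datetime(2018, 9, 1)
print(is_fresh(fetched, datetime(2018, 9, 5)))   # four days old: served from cache
print(is_fresh(fetched, datetime(2018, 9, 10)))  # nine days old: browser checks again
```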
Concretely this had the following effects:

The first time a page is loaded (FL) it takes around one second to fully load. The second time, however, the files are loaded from the cache and the load time is reduced by 40% on average. Since load times consist of the time it takes to load resources over the network plus the time it takes the browser to render the styling, caching can really decrease load times.
| Time(ms) | FP | WE | HS | FW | CW |
|----------|-------|--------|-------|--------|--------|
@@ -131,8 +128,10 @@ The first time the page is loaded (FL) it around one second to fully load the pa
| SL | 660ms | 628ms | 625ms | 788ms | 675ms |
| savings | 34% | 41% | 35% | 50% | 40% |
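The savings row follows directly from the two load times. A quick check with the front page figures, a second load of 660 ms against a first load of roughly one second as noted above:

```python
def cache_savings(first_load_ms, second_load_ms):
    """Percentage of load time saved by serving from the local cache."""
    return round((1 - second_load_ms / first_load_ms) * 100)

# Front page: 660 ms second load, assuming a first load of about 1000 ms
print(cache_savings(1000, 660))  # → 34
```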
In terms of data transferred the change is even more radical, essentially meaning that no data is transferred the second time a page is visited.
| KBs | FP | WE | HS | FW | CW |
|----------|----------|-----------|----------|-----------|----------|
| FL | 455.86KB | 1240.00KB | 690.48KB | 1610.00KB | 996.21KB |
@@ -140,13 +139,13 @@ In terms of data transferred the change is even more radical, essentially meanin
| savings | >99% | >99% | >99% | >99% | >99% |
In case you want to force the browser to download all the resources over the network again, do a 'hard refresh' by pressing `ctrl+shift+r`.
### HTTP2, a more efficient protocol
Another optimization is the use of [HTTP2](https://http2.github.io/) over HTTP/1.1. HTTP2 is a relatively recent protocol that increases the transport speed of the data. The speed increase is the result of HTTP2 compressing the data headers and multiplexing multiple requests into a single TCP connection. To summarize: it has less data overhead and needs to open fewer connections.
The effect of this is most notable when the browser needs to make a lot of different requests, since these can all fit into a single connection. In our case that concretely means that articles with more images will load slightly faster over HTTP2 than over HTTP/1.1.
| | FP | WE | HS | FW | CW |
|----------|-------|-------|-------|-------|-------|
@@ -155,7 +154,7 @@ The effect of this is most notable when the browser needs to do a lot of request
| Images | 9 | 21 | 11 | 19 | 23 |
| savings | 11% | 21% | 0% | 4% | 18% |
Not all browsers support HTTP2, but in those cases the NGINX implementation will automatically serve the files over HTTP/1.1.
It is enabled at the start of the server directive:
@@ -165,9 +164,9 @@ It is enabled at the start of the server directive:
}
### Serve the page over HTTPS

Even though the website has no dynamic functionality like login forms, we have also implemented SSL to provide Transport Layer Security. We do this mostly to improve page rankings in search engines.
There is something to be said for supporting both HTTP and HTTPS versions of the website, but in our case that would mean more redirects or maintaining two versions of the site.
@@ -190,23 +189,25 @@ Then we've set up SSL with the following tweaks:
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 180m;
SSL sessions only expire after three hours, meaning that while someone browses the website they don't need to renegotiate a new SSL session all the time.
We use a limited set of modern cryptographic ciphers and protocols:

:::console
# Enable server-side protection against BEAST attacks
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
# Disable SSLv3
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
We tell the visitor's browser to always use HTTPS, in order to reduce the number of 301 redirects, which might slow down loading times:
:::console
# Enable HSTS (https://developer.mozilla.org/en-US/docs/Security/HTTP_Strict_Transport_Security)
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
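The `max-age` in that header is expressed in seconds; unpacking it shows how long browsers will keep insisting on HTTPS for this domain:

```python
# Parse the max-age out of the HSTS header we send
header = "max-age=63072000; includeSubdomains"
max_age = int(header.split(";")[0].split("=")[1])
print(max_age / (365 * 24 * 3600))  # → 2.0, i.e. two years
```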
We enable OCSP stapling, which is a quick way for browsers to check whether the certificate is still active without incurring extra round trips to the Certificate Issuer. Most tutorials recommend setting Google's `8.8.8.8` and `8.8.4.4` DNS servers, but we don't want to use those. Instead we chose some servers provided through <https://www.opennic.org> that are close to our location:
:::console
# Enable OCSP stapling (http://blog.mozilla.org/security/2013/07/29/ocsp-stapling-in-firefox)
@@ -216,21 +217,18 @@ We tell the visitors browser to always use HTTPS, in order to reduce the amounts
resolver 87.98.175.85 193.183.98.66 valid=300s;
resolver_timeout 5s;
Last but not least, we change the size of the SSL buffer to improve the so-called 'Time To First Byte'[^TTFB], which essentially shortens the time between a click and things changing on the screen:
:::console
# Lower the buffer size to decrease TTFB
ssl_buffer_size 4k;
These SSL tweaks are heavily indebted to two articles by Bjorn Johansen[^Johansen] and Hayden James[^James].
### Setting up LetsEncrypt
The above are all the SSL performance tweaks but we still need to get our SSL certificates. We'll do so using LetsEncrypt[^LE].
First install certbot:
@@ -242,13 +240,13 @@ Then run the command to request a certificate using the webroot authenticator:
:::console
sudo certbot certonly --authenticator webroot --pre-hook "nginx -s stop" --post-hook "nginx"
We use the `certonly` directive so that certbot just creates the certificates for us but doesn't touch our configuration.
This will prompt an interactive screen where you set the (sub)domain(s) you're requesting certificates for. In our case that was `solar.lowtechmagazine.com`.
Then it will ask for the location of the webroot, which in our case is `/var/www/html/`. It will then proceed to generate a certificate.
Then the only thing you need to do in your NGINX config is to specify where your certificates are located. This is usually in `/etc/letsencrypt/live/domain.name/`. In our case it is the following:
:::console
ssl_certificate /etc/letsencrypt/live/solar.lowtechmagazine.com/fullchain.pem;
@@ -387,7 +385,13 @@ An increase in traffic for example will have an impact on the amount of energy t
# Webdesign
[^illusion]: <https://en.wikipedia.org/wiki/Dither#Digital_photography_and_image_processing>
[^digitalhalftone]: <http://www.efg2.com/Lab/Library/ImageProcessing/DHALF.TXT>
[^dithering]: See for example <https://web.archive.org/web/20180325055007/https://bisqwit.iki.fi/story/howto/dither/jy/>
[^TTFB]: <https://en.wikipedia.org/wiki/Time_to_first_byte>
[^LE]: <https://letsencrypt.org/>
[^Johansen]: <https://bjornjohansen.no/optimizing-https-nginx>
[^James]: <https://haydenjames.io/nginx-tuning-tips-tls-ssl-https-ttfb-latency/>
# Feedback & contributions
* xmpp chatroom
