This blog is now on ASP.NET Core

This blog is a static website compiled using Hugo. Up to this point, I built the website and packaged all of the assets into an NGINX Docker container hosted on my dedicated server cluster.

This worked well and was simple, but I have an upcoming project, which I’ll announce soon, that requires dynamic content that NGINX and pure static files can’t easily provide.

To fix this, I decided to migrate this blog from NGINX to ASP.NET Core. Here’s how and why.

Background

I’m still using Hugo to generate all of the HTML files for my blog from the raw Markdown files, just like any other static website. I make changes, commit them to GitHub, then GitHub Actions runs hugo, builds the .NET application, and packs the web assets into the image.

Why

Why C#? I like C# and .NET, and I wanted to see how .NET has evolved since I last used it.

Why replace NGINX? My next project, implementing ActivityPub, requires dynamic features that are difficult to handle in NGINX. For example, I need to send push notifications on new posts and process inbox messages; NGINX can’t do that. I experimented with the nginx-lua-module, implementing the logic in Lua, but my use case was too complex. I also looked at Varnish, which supports a more imperative style of response handling, but it also lacked some features I needed. I ultimately had an NGINX-based prototype, but the resulting architecture was more complex than I wanted.

Static Files

The first thing we do is serve the files located in the /app/wwwroot folder (this is where we’ll put the Hugo output files).

var app = builder.Build();
// ...

app.UseDefaultFiles(); // When a user requests /foobar/, serve up /foobar/index.html
app.UseStaticFiles();

app.Run();
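As a mental model, UseDefaultFiles only rewrites the request path before UseStaticFiles runs. The sketch below (Python, purely illustrative, and ignoring the file-existence check the real middleware performs) shows the mapping it does:

```python
def resolve_default_file(path: str, default: str = "index.html") -> str:
    """Map a directory-style request path to its default document,
    mirroring what UseDefaultFiles does before UseStaticFiles runs."""
    if not path.endswith("/"):
        path += "/"
    return path + default

print(resolve_default_file("/foobar/"))  # -> /foobar/index.html
```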

The Dockerfile looks like:

FROM hugomods/hugo:0.123.0 AS hugobuild

COPY . /src
RUN HUGO_ENVIRONMENT=production hugo --minify -b https://www.technowizardry.net

FROM mcr.microsoft.com/dotnet/runtime:8.0 AS base
USER app
WORKDIR /app

FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
    clang zlib1g-dev
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["http-server/http-server.csproj", "."]
RUN dotnet restore "./http-server.csproj"
COPY http-server/ .
RUN dotnet build "./http-server.csproj" -c $BUILD_CONFIGURATION -o /app/build

# This stage is used to build the AOT-compiled output
FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "./http-server.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=true

ARG FINAL_BASE_IMAGE
FROM ${FINAL_BASE_IMAGE:-mcr.microsoft.com/dotnet/runtime-deps:8.0} AS final
WORKDIR /app
COPY --from=publish /app/publish /app/
COPY --from=hugobuild /src/public /app/wwwroot/
ENTRYPOINT ["./http-server"]

Compressing static files

That’s easy, but the next problem is supporting compression. With NGINX, I can pre-compress files in the Docker image, then use the gzip_static option to serve them to clients. ASP.NET Core does support response compression, but the response gets compressed dynamically for every request instead of using pre-compressed files. Compressing a file every time a client fetches it, even though it never changes, is wasted effort. Let’s do better.

I will support both gzip, which is the most common compression encoding, and brotli, a newer encoding that compresses files better.
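To see why pre-compression is worth it, here’s a quick illustration using Python’s standard-library gzip module (brotli isn’t in the stdlib, so only gzip is shown; the payload is a made-up repetitive snippet, not my real pages):

```python
import gzip

# A repetitive HTML-like payload, similar in spirit to Hugo's output
html = b"<article><h2>Post title</h2><p>Some body text here.</p></article>\n" * 200

compressed = gzip.compress(html, compresslevel=9)
print(f"{len(html)} bytes -> {len(compressed)} bytes")

# Compressing once at build time means this work never happens per-request
assert len(compressed) < len(html)
```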

Generating the files

First, we need to generate the pre-compressed files in the Dockerfile so they can be found at runtime. The command I use below compresses anything over 1 KiB, because smaller files often don’t compress enough to be worth the overhead. I also exclude media files because they already have their own compression applied and won’t benefit from more.

Here’s what the Dockerfile looks like:

FROM hugomods/hugo:0.123.0 AS hugobuild

COPY . /src
RUN HUGO_ENVIRONMENT=production hugo --minify -b https://www.technowizardry.net

# First, compress using gzip
RUN find /src/public ! -name '*.png' ! -name '*.jpg' ! -name "*.mp4" -size +1k -type f -print -exec gzip -k -f "{}" \;

# Then compress using brotli
FROM alpine:3 AS brotlibuild
RUN apk update && apk add --upgrade brotli
COPY --from=hugobuild /src/public /src/public/
RUN find /src/public ! -name "*.gz" ! -name "*.png" ! -name "*.jpg" ! -name "*.mp4" -size +1k -type f -print -exec brotli "{}" \;

# ...
COPY --from=brotlibuild /src/public /app/wwwroot/
ENTRYPOINT ["./http-server"]
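The selection rule encoded in those find commands (compress files over 1 KiB, skip media and already-compressed output) can be sketched like this; it’s a Python illustration only, the real filtering happens in the Dockerfile above:

```python
import os

SKIP_EXTENSIONS = {".png", ".jpg", ".mp4", ".gz", ".br"}
MIN_SIZE = 1024  # 1 KiB; smaller files rarely compress enough to be worth it

def should_precompress(path: str, size_bytes: int) -> bool:
    """Mirror the find filters: big enough, and not already-compressed media."""
    _, ext = os.path.splitext(path)
    return size_bytes > MIN_SIZE and ext.lower() not in SKIP_EXTENSIONS

assert should_precompress("/src/public/index.html", 4096)
assert not should_precompress("/src/public/logo.png", 4096)  # media, skip
assert not should_precompress("/src/public/tiny.css", 512)   # too small
```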

Serving the files

Next, we need to look at every inbound HTTP request and check whether the client passed an Accept-Encoding header (MDN Web Docs). The header can list one or more encodings; we’re interested in gzip, br, and identity.
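For reference, Accept-Encoding entries may also carry quality weights like br;q=0.9. Here’s a sketch of a fuller parse (Python, illustrative only; the middleware in this post deliberately ignores the weights):

```python
def parse_accept_encoding(header: str) -> list[tuple[str, float]]:
    """Parse e.g. 'gzip, deflate, br;q=0.9' into (encoding, quality)
    pairs, highest quality first. Unspecified quality defaults to 1.0."""
    encodings = []
    for part in header.split(","):
        part = part.strip()
        if not part:
            continue
        name, _, params = part.partition(";")
        quality = 1.0
        params = params.strip()
        if params.startswith("q="):
            try:
                quality = float(params[2:])
            except ValueError:
                quality = 0.0
        encodings.append((name.strip().lower(), quality))
    return sorted(encodings, key=lambda pair: -pair[1])

print(parse_accept_encoding("gzip, deflate, br;q=0.9"))
# [('gzip', 1.0), ('deflate', 1.0), ('br', 0.9)]
```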

This check will be added to the ASP.NET Core request processing chain. ASP.NET Core ships with a class (DefaultFilesMiddleware.cs) that implements part of what I wanted, but it doesn’t check for pre-generated .gz or .br files.

Let’s break this down. The first chunk is just initialization. The _next delegate represents the next middleware in the chain; in this case, that’s the StaticFileMiddleware. The _fileProvider provides a handle to the directory that contains our static files. When running in Docker, that’s /app/wwwroot.

using Microsoft.Extensions.FileProviders;
using Microsoft.Extensions.Options;

public class CustomDefaultFileMiddleware
{
  private readonly RequestDelegate _next;
  private readonly IFileProvider _fileProvider;

  public CustomDefaultFileMiddleware(RequestDelegate next, IWebHostEnvironment hostingEnv, IOptions<StaticFileOptions> options)
  {
    // next middleware in the chain
    _next = next;
    // The file provider knows where to find the source files
    _fileProvider = options.Value.FileProvider ?? hostingEnv.ContentRootFileProvider;
  }

Next, the Invoke method is called for every request. Only GET and HEAD requests should be handled; POSTs, PUTs, etc. should never touch a static file.

  public Task Invoke(HttpContext context)
  {
    if (context.GetEndpoint()?.RequestDelegate is null
      && (context.Request.Method == "GET" || context.Request.Method == "HEAD"))
    {
      var subpath = context.Request.Path;

      // Add a slash at the end if it doesn't exist
      // This ensures that when we append the file name, it's a valid path
      // TODO: Consider doing a client redirect here instead to ensure
      // the user is always on a consistent location
      if (!subpath.Value!.EndsWith("/"))
      {
        subpath += new PathString("/");
      }

      // The default document to look for in the directory
      var fileToFind = "index.html";

Next, we check whether the folder exists. If the client passed an Accept-Encoding header, we scan it to see whether we support any of the listed encodings and have a matching pre-compressed file on disk. We then rewrite the request path and, if applicable, set a Content-Encoding response header before passing the request along to the next middleware.

      var dirContents = _fileProvider.GetDirectoryContents(subpath.Value!);
      if (dirContents.Exists)
      {
          var acceptEncoding = context.Request.Headers.AcceptEncoding;
          foreach (var encoding in acceptEncoding)
          {
              // We use StartsWith because request encodings can include a ";q=0.5" to specify quality
              // but we ignore that
              if (encoding.StartsWith("br") && FileExists(context, subpath.Value!, fileToFind + ".br"))
              {
                  fileToFind += ".br";
                  context.Response.Headers.ContentEncoding = "br";
                  break;
              }
              else if (encoding.StartsWith("gzip") && FileExists(context, subpath.Value!, fileToFind + ".gz"))
              {
                  fileToFind += ".gz";
                  context.Response.Headers.ContentEncoding = "gzip";
                  break;
              }
          }
          // Match found, re-write the url. A later middleware will actually serve the file.
          context.Request.Path = new PathString(subpath + fileToFind);
      }
    }

    return _next(context);
  }
}

Then replace the previously used app.UseDefaultFiles with our custom middleware:

var app = builder.Build();

// app.UseDefaultFiles()
app.UseMiddleware<CustomDefaultFileMiddleware>();
app.UseStaticFiles();

app.Run();

Response headers

But wait, there’s a problem. Opening my website in a web browser downloads the page instead of displaying it. Looking at the response, the Content-Type header shows a gzip MIME type instead of HTML.

GET / HTTP/1.1
Host: localhost:1313
Accept-Encoding: gzip, deflate, br

HTTP/1.1 200 OK
Content-Encoding: gzip
Content-Length: 4499
Content-Type: application/x-gzip

This happens because our CustomDefaultFileMiddleware says “hey, serve up the index.html.gz file please” and the StaticFileMiddleware says “okay, here we go; by the way, the MIME type for .gz is application/x-gzip.” The correct behavior is to return text/html, the MIME type of the un-encoded file.

This can be done by adding an OnPrepareResponse callback to the UseStaticFiles step that corrects the response MIME type.

// app.UseStaticFiles()
var contentTypeProvider = new FileExtensionContentTypeProvider();
app.UseStaticFiles(new StaticFileOptions
{
  ContentTypeProvider = contentTypeProvider,
  OnPrepareResponse = (context) => {
    if (context.Context.Response.Headers.ContentEncoding.Count > 0)
    {
      // Strip the 3-character ".gz"/".br" suffix to recover the real file name
      var subGz = context.Context.Request.Path.Value![..^3];
      if (contentTypeProvider.TryGetContentType(subGz, out var contentType))
      {
        context.Context.Response.Headers.ContentType = contentType;
      }
    }
  }
});

Now, the response Content-Type header shows the MIME type of the file being served and everything works.

GET / HTTP/1.1
Host: localhost:1313
Accept-Encoding: gzip, deflate, br

HTTP/1.1 200 OK
Content-Encoding: gzip
Content-Length: 4499
Content-Type: text/html

Cache-Control

Next up, we need to enable client caching. By default, no caching headers are sent, so nothing is cached and the client has to make a request to the server for every single file. Caching improves browsing performance and is controlled through the Cache-Control header (MDN docs).

In my blog, I only set caching headers for static assets that are hashed, like my CSS, JS, and images. Other content, such as HTML, will not be cached to ensure clients revalidate and see if there’s any new content.
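The ten-year lifetime I use works out as follows (a quick sanity check; 315360000 is the number that will appear in the Cache-Control response header):

```python
from datetime import timedelta

# Equivalent of TimeSpan.FromDays(365 * 10) expressed in whole seconds
max_age = int(timedelta(days=365 * 10).total_seconds())
print(f"Cache-Control: public, max-age={max_age}")  # max-age=315360000
```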

To make this happen, we extend the OnPrepareResponse callback from the previous step:

private readonly string MAX_CACHE_HEADER = new CacheControlHeaderValue()
  {
    Public = true,
    MaxAge = TimeSpan.FromDays(365 * 10) // 10 years
  }.ToString();

// ...

var contentTypeProvider = new FileExtensionContentTypeProvider();
app.UseStaticFiles(new StaticFileOptions
  {
    ContentTypeProvider = contentTypeProvider,
    OnPrepareResponse = (context) => {
      var response = context.Context.Response;
      var urlPath = context.Context.Request.Path;
      var workingPath = urlPath.Value!;
      if (response.Headers.ContentEncoding.Count > 0)
      {
        // Strip the ".gz"/".br" suffix to recover the real file name
        workingPath = workingPath[..^3];
        if (contentTypeProvider.TryGetContentType(workingPath, out var contentType))
        {
          response.Headers.ContentType = contentType;
        }
      }
      if (   urlPath.StartsWithSegments("/scss")
        || urlPath.StartsWithSegments("/ts")
        // Don't cache SVGs because Hugo doesn't hash the file name
        // If we fix that, we can check the mime type
        || workingPath.EndsWith(".png"))
      {
        response.Headers.CacheControl = MAX_CACHE_HEADER;
      }
    }
  });

With that, we now have caching headers set and clients won’t refetch static content.

GET /ts/main.abc.js HTTP/1.1
Host: localhost:1313
Accept-Encoding: gzip, deflate, br

HTTP/1.1 200 OK
Content-Length: 7835
Content-Type: text/javascript
Cache-Control: public, max-age=315360000

Conclusion

With these changes, I now have a fully equivalent replacement for my previous NGINX web server. I had to implement code to select which default file to return when a user requests /foobar/, and to serve pre-compressed static files, which in turn required fixing MIME types. NGINX handles all of this trivially with the gzip_static on directive. I also had to add cache-control headers, again a one-liner in NGINX with the expires max directive.

Looking at this comparison in isolation, I ended up with more code than what NGINX handles out of the box. However, I didn’t do this just for the sake of change: when I implemented ActivityPub, I built on top of this server and added features that were not easy with NGINX.


Comments

Comments are currently unavailable while I move to this new blog platform. To give feedback, send an email to adam [at] this website url.