FaaS is really serverless
5 December 2022

If you want to get to writing business logic as quickly as possible and you don’t want to be bogged down by scaffolding (whether it’s managing servers or building container images), then Functions as a Service (FaaS) is for you



I’ll go so far as to say you probably won’t have to use or even think about the HTTP routing component of your favourite web framework. I have yet to find a use for it, because the mapping between request URIs and code is managed one layer back, which means FaaS makes it possible to manage even less code


I use GCP as my cloud provider, and Google’s answer to FaaS is Cloud Functions. AWS has Lambda, but it does seem like a lot of work getting PHP in as a runtime, so to hell with that. Azure has Functions, but it’s also a massive pain-in-the-ass to get PHP in as a supported language, so I’m really glad Cloud Functions has no such language bias


Functions as a Service (FaaS) sits at the top of the processing abstraction model

The higher you go in the processing abstraction model, the less operational work you’ll need to be involved with and the more hidden you’ll be from the implementation details of the layers below


Being abstracted like this is a really good idea because it will allow you to focus on writing application logic instead of spending time dealing with annoying operational issues


Think of the abstraction model like this:
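Roughly sketched (the layer labels and example services are my own shorthand, not an official taxonomy):

```
FaaS (Cloud Functions, Lambda)           <- write code only
Containers (Cloud Run, Kubernetes)       <- build images, manage entrypoints
Virtual machines (Compute Engine, EC2)   <- manage operating systems
Physical hardware (data centres)         <- manage servers and networks
```

The higher the layer you target, the less of the column on the right you have to care about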



Why are the layers below not so great?

Because they don’t contribute to the product you are building


Code contributes to the product. Well-expressed algorithms, well-organised data structures and the features the customer requested contribute to the product. Spending days, or even worse weeks, configuring and deploying the scaffolding layers contributes absolutely nothing to the product


You don’t sell a virtual machine or an operating system package, and you don’t get an achievement badge for spending weeks configuring operating systems while there are bugs in the codebase or the product is missing important features


I’m very aware that code does not run on water vapour, so it’s a good idea to shift the responsibility of keeping the lights on to the cloud, leaving you free to focus on building the core product


Only if it’s not your job

If your job is to manage hardware (a data centre technician, for example) then managing hardware is fine for you. If your job is to manage operating systems (a sysops engineer, for example) then managing operating systems is fine for you


I’m actually referring to people who are tasked with writing business or application logic at a company or startup that does not want to scale headcount internally for non-code work, which sort of makes sense if you think about it


If the company is operating at a larger scale and increasing headcount is not an issue, or if the company requires the additional control of the layers below to fulfil the requirements of PCI DSS or some other compliance standard, then this blog post is not aimed at you. That’s not to say FaaS can’t satisfy PCI DSS, because it clearly can; I’m just not sure who would actually go through the auditing process, but this is off-topic


With FaaS, literally everything is already taken care of


Focus exclusively on code

If you go down just a single layer below FaaS (the layer where containerisation or application virtualisation exists), you’ll still need to do non-code work like building container images and managing entrypoint scripts. This non-code work gets worse the further down you go


I’ve run into issues with Docker containers whose entrypoint scripts took too long to stand up the container. They would run through a whole lot of conditional logic and eventually execute something like Supervisor to start the application stack, but by then a timeout error had already occurred. To correct this, I wrote my own entrypoint shell script that stood up the application stack much more quickly, but all of this took time away from writing application logic. You can see tell-tale signs of my work here, under the “Zero-cost asynchronous worker” section


With FaaS, you focus exclusively on code. I shouldn’t need to do any more scaffolding work in future; I have yet to run into a scenario that requires anything other than writing code


Take in a request object, return a response object

The function you define needs to take a parameter that represents the HTTP request and return a value that represents the HTTP response. This is pretty much the only strict requirement when it comes to FaaS (the outside caller expects this behaviour). There is a slight exception to this in the PHP runtime (and probably the other languages too) regarding the return type of the entrypoint function: you can return either an object that implements ResponseInterface or a string (I’d define a union type in this case) to be used as the HTTP response body
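A sketch of what that union return type could look like (note that native union type syntax needs PHP 8.0 or newer; my understanding of the Functions Framework is that a bare string becomes the body of a 200 response):

```php
use GuzzleHttp\Psr7\Response;
use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;

function doSomethingAwesome(ServerRequestInterface $request): ResponseInterface|string
{
    if ($request->getMethod() === 'GET') {
        return 'Hello World!'; // shorthand: becomes the response body
    }

    // Full control over status code and headers when you need it
    return new Response(405, ['Allow' => 'GET'], 'Method Not Allowed');
}
```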


You can either write application logic right here in the function or be smart about it and use the function as the front controller
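Using the function as a front controller can be as simple as dispatching on the request path. A sketch, where the controller class names are hypothetical stand-ins for your own application code:

```php
use GuzzleHttp\Psr7\Response;
use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;

function doSomethingAwesome(ServerRequestInterface $request): ResponseInterface
{
    // Route based on the request path instead of using a framework router
    $path = $request->getUri()->getPath();

    if ($path === '/orders') {
        return (new App\Controller\Orders())->handle($request); // hypothetical
    }
    if ($path === '/customers') {
        return (new App\Controller\Customers())->handle($request); // hypothetical
    }

    return new Response(404, [], 'Not Found');
}
```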


Invoked by an HTTP request on a public endpoint

FaaS will work perfectly as your web app's stateless backend API because the function can be invoked by an HTTP request on a publicly accessible endpoint. This is true for Google Cloud Functions and AWS Lambda, and I expect Azure Functions works the same way


This publicly accessible endpoint is provided to you during the deployment of Cloud Functions (it’s an obfuscated hostname in the cloudfunctions.net DNS zone). You can also use Cloud Functions as the backend for a global HTTP load balancer, but this starts incurring extra cost. From what I can tell, the same logic applies to the other cloud providers’ FaaS solutions


There are other invocation mechanisms (triggered by an internal event) but I haven’t explored any of those yet


Scales perfectly with zero effort

Autoscaling is native to the cloud, and with autoscaling in FaaS you don’t have to build or prepare base operating system images to scale horizontally in Compute Engine or EC2, or prepare worker node images for Kubernetes, or anything like that


All you need to do is set the minimum and maximum number of instances and you shouldn’t have to worry about any scaling issues. You can also scale to zero so the bill should be relatively low
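On Cloud Functions, the instance bounds can be set at deploy time. A sketch (the function name, region and bounds are placeholders; check `gcloud functions deploy --help` for your setup):

```
gcloud functions deploy doSomethingAwesome \
  --gen2 \
  --runtime=php81 \
  --region=us-central1 \
  --trigger-http \
  --min-instances=0 \
  --max-instances=10
```

With `--min-instances=0`, the function scales to zero when idle, which is where the low bill comes from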


It’s also important to note that no state (no server state, no persistence on the local file system, etc.) should be assumed to survive between the current invocation and previous invocations of the app. It’s very much “process incoming request, return response, collapse”, and this makes scaling easy to think about


Example code

Google Cloud Functions supports a variety of runtimes, my favourite being PHP, so it’s apt that I use code written in PHP as an example


use GuzzleHttp\Psr7\Response;
use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;

function doSomethingAwesome(ServerRequestInterface $request): ResponseInterface
{
    // Status code, headers, body – the constructor's defaults are being overridden
    return new Response(200, [], 'Hello World!');
}


Here you can see my entrypoint function is aptly named doSomethingAwesome(). It type-hints using the interfaces provided by the PSR HTTP message library (which is not an implementation but rather a collection of interfaces). This parameter type is mandatory, but it’s up to you to decide which implementation you are going to use


Guzzle is an implementation of the PSR HTTP message library, so it provides really useful objects. I chose Guzzle because it ships as a dependency of google/cloud-functions-framework, but I’m sure there are other implementations out there you can use if you don’t like Guzzle


In my example above, I’m returning a Response object (which implements ResponseInterface), providing an HTTP status code, an empty array of HTTP headers and the HTTP response body to the constructor (all of these constructor parameters have defaults that I’m overriding). And just like magic, it all works perfectly


The Guzzle library includes a whole lot of other really useful objects. Here are just two as examples:
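For instance, the Uri value object and the stream helper (both part of the guzzlehttp/psr7 package; these two are my illustrative picks):

```php
use GuzzleHttp\Psr7\Uri;
use GuzzleHttp\Psr7\Utils;

// Uri implements Psr\Http\Message\UriInterface
$uri = new Uri('https://example.com/path?foo=bar');
echo $uri->getQuery(); // foo=bar

// Utils::streamFor() wraps a string (or resource) in a PSR-7 stream
$stream = Utils::streamFor('Hello World!');
echo $stream->getContents();
```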



This is only the beginning

Over the years, I’ve gone from deploying my apps into static Linode server infrastructure to containerising all my apps and deploying them into Google Cloud Run, but I’m not going to stop at serverless containers, because FaaS is an order of magnitude less work


All my app backends are written in PHP and use Composer’s PSR-4 class autoloader, so I feel it shouldn’t be difficult to migrate all my apps to Google Cloud Functions. The PSR-4 standard forces appropriate namespacing and layout, so my custom front controllers can be replaced by the entrypoint function, and in the script that defines the function I’ll require the autoloader
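The migration shape I have in mind looks something like this (a sketch; the namespace and class name are hypothetical, and the PHP runtime looks for the entrypoint in index.php by default):

```php
<?php
// index.php – Cloud Functions loads this file and calls the entrypoint

require __DIR__ . '/vendor/autoload.php'; // Composer's PSR-4 autoloader

use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;

function doSomethingAwesome(ServerRequestInterface $request): ResponseInterface
{
    // Delegate to the existing, PSR-4 autoloaded application code
    return (new MyApp\FrontController())->handle($request); // hypothetical
}
```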


It’s easy to get running locally

Development environments usually ship with a lot of opinion but getting Cloud Functions with PHP running locally is really simple


composer.json

{
    "require": {
        "google/cloud-functions-framework": "^1.1",
        "psr/http-message": "^1.0"
    },
    "scripts": {
        "start": [
            "Composer\\Config::disableProcessTimeout",
            "FUNCTION_TARGET=doSomethingAwesome php -S localhost:${PORT:-8080} vendor/bin/router.php"
        ]
    }
}


Here you can see FUNCTION_TARGET being set to the name of my entrypoint function, then PHP being invoked on the CLI with its built-in development server bound to a local port (TCP/8080 by default)
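The `${PORT:-8080}` part is standard shell parameter expansion: use `$PORT` if it’s set, otherwise fall back to 8080:

```shell
# Falls back to the default when PORT is unset
unset PORT
echo "${PORT:-8080}"

# Uses the variable when it is set
PORT=9000
echo "${PORT:-8080}"
```

This is why `composer run start` works out of the box locally while still respecting a PORT set by the environment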


The PHP development server deserves a lot of credit here. It’s great because I don’t have to have anything else running when I’m working with basic functions like this (of course, my relational database and in-memory cache run locally elsewhere)


composer run start

PHP 7.3.33 Development Server started at Sun Dec  4 21:50:56 2022
Listening on http://localhost:8080
Document root is /Users/bruce/Documents/GitHub/awesome-app
Press Ctrl-C to quit.


Now I’m able to send HTTP requests to the local development server, which will in turn invoke my entrypoint function
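For example, assuming the doSomethingAwesome() entrypoint from earlier:

```
curl http://localhost:8080/
Hello World!
```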


Messages passed to error_log() are output directly to the terminal. I haven’t had any formatting issues when I serialise an object with print_r() and pass it to error_log()
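The pattern looks like this (print_r()’s second parameter makes it return the representation as a string instead of printing it):

```php
$order = ['id' => 42, 'status' => 'paid'];

// print_r($value, true) returns the string rather than echoing it
error_log(print_r($order, true));
```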


Trade-offs


Might not be able to satisfy strict compliance standards easily

The higher you go in the abstraction model, the less control you have over the layers below. This is not inherently a bad thing, but it may become a problem if you have strict requirements in areas like networking or some security standard


I’ve worked in an enterprise environment where people have the strangest opinions. One of these is that certain infrastructure resources need to be on specific IP addresses. I never understood the reasoning behind this. My argument was: if the IP address is routable from the internet but requests without valid authentication credentials or headers are rejected, then the service shouldn’t be considered public. Some people don’t agree, and in that case I’m not sure you have control over which IP address the FaaS is accessible on


Failed first request due to slow cold starts if you scale to zero

When the instance has scaled to zero, the first request may experience significant latency while the code execution environment is dynamically provisioned. I’ve mentioned this before. It’s also not inherently an issue; it’s more about how you handle the delayed request than anything else


What I’ve experienced, as well, is that the first HTTP connection, the one that triggers the cold start, is kept open until the instance becomes available and the response is sent out. Google does not seem to drop the connection, which is good! Your app will just have to handle the initial delay correctly