Serverless Functions, Made Simple.

OpenFaaS® makes it simple to deploy both functions and existing code to Kubernetes

Deploy to production with OpenFaaS Standard and Enterprise

Run anywhere


Deploy your functions on-premises or in the cloud, with portable OCI images.

Any code


Write functions in any language, and bring your existing microservices along too.

Any scale


Pro features scale your functions to meet demand, and down to zero when idle.

Run your code anywhere with the same unified experience.

You can deploy OpenFaaS anywhere you have Kubernetes.

Our templates follow best practices, meaning you can write and deploy a new function to production within a few minutes, knowing it will scale to meet demand.

OpenFaaS Pro then builds on our Open Source codebase to bring flexible auto-scaling, event-connectors, monitoring, a new dashboard, GitOps and various levels of support.

Reduce boilerplate

Templates that just work

Get code into production within minutes using our templates, or create your own.

Efficient scaling

Your functions can be fine-tuned to scale to match the type of traffic they receive, including to zero to save on costs.
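As a sketch of how that tuning looks in practice (the function name, image and values here are illustrative; the label names follow the OpenFaaS autoscaling documentation), scaling behaviour is declared with labels in the function's stack.yml:

```yaml
functions:
  pdf-writer:
    image: ghcr.io/example/pdf-writer:0.1.0
    labels:
      com.openfaas.scale.min: "1"              # floor while traffic is flowing
      com.openfaas.scale.max: "10"             # ceiling under peak load
      com.openfaas.scale.zero: "true"          # allow scaling to zero when idle
      com.openfaas.scale.zero-duration: "15m"  # idle period before scaling down
```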

Event-driven workloads

Invoke functions through events from Apache Kafka, AWS SQS, PostgreSQL, Cron and MQTT.
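As a sketch of how an event trigger is wired up (the function name, image and schedule here are illustrative; the annotation names follow the cron-connector's documentation), a scheduled function is declared with annotations in its stack.yml:

```yaml
functions:
  nightly-report:
    image: ghcr.io/example/nightly-report:0.1.0
    annotations:
      topic: cron-function     # subscribes the function to the cron-connector
      schedule: "0 2 * * *"    # standard cron expression: every night at 02:00
```

The other connectors work the same way: the function subscribes to a topic via an annotation, and the connector invokes it whenever a matching event arrives.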


A sandbox for your customers' code

With OpenFaaS Enterprise, you can take code snippets from your customers and use them to extend your platform.

You can use the Function Builder API to turn source code into functions, which can be deployed via the OpenFaaS REST API.
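As a rough sketch of what deploying through the REST API involves (the field names follow the gateway's documented deploy endpoint at /system/functions; the gateway URL, function name and image below are placeholders), a platform might build and send a request like this:

```python
import json
import urllib.request

def deploy_payload(name, image, labels=None):
    """Build the JSON body for the gateway's deploy endpoint
    (POST /system/functions). Only the core fields are shown."""
    return json.dumps({
        "service": name,     # the function's name on the platform
        "image": image,      # the OCI image produced by the Function Builder
        "labels": labels or {},
    })

def deploy_request(gateway, payload, auth_header):
    """Wrap the payload in a POST request to the gateway. Send it with
    urllib.request.urlopen(...) against a real cluster with valid credentials."""
    return urllib.request.Request(
        f"{gateway}/system/functions",
        data=payload.encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": auth_header},
        method="POST",
    )
```

In practice most teams use `faas-cli deploy` or the SDKs rather than hand-rolling requests, but the flow is the same: build an image, then register it with the gateway.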

Network policies, resource limits, a runtime class, a read-only filesystem and a dedicated namespace mean their code is as isolated as possible.

Who's doing this already? Waylay (for custom functions for industrial IoT), Patchworks (customer extensions for e-commerce), LivePerson (for customer extensions) and Cognite (for custom ML/data-science functions).

ETL and data-pipelines

OpenFaaS Pro autoscaling can be fine-tuned to match the execution pattern of your functions, and to retry failed invocations.

Through JetStream for OpenFaaS, you can fan out to many thousands of executions, executing in parallel and scaling automatically.

ML models can be deployed as functions, and scaled, even if they run for a long time.

See also: Exploring the Fan out and Fan in pattern with OpenFaaS

Kubernetes, made usable

OpenFaaS enriches Kubernetes with scaling, queueing, monitoring and event triggers, so your team can focus on shipping features.

Customers have told us that even from early on, they were shipping new functionality to production within hours.

Functions can be run on-demand, or on a schedule, and can be triggered by events from your existing systems like Apache Kafka or AWS. If you invoke them asynchronously, you'll also get to benefit from retries without writing any additional code.
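For example, any deployed function can be queued instead of called synchronously by swapping `/function/` for `/async-function/` in its URL; the gateway accepts the request immediately and invokes the function in the background. The function name and callback URL below are placeholders:

```shell
$ curl -i http://127.0.0.1:8080/async-function/pdf-writer \
    -H "X-Callback-Url: http://gateway.openfaas:8080/function/on-complete" \
    -d '{"url": "https://example.com"}'
```

If the invocation fails, the queue-worker retries it according to your configuration, and the result is delivered to the callback URL when it completes.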

Deel migrated from AWS Lambda to Kubernetes, only to find that their code would no longer scale how they needed. They reached out to us, and we wrote up a reference architecture.

Learn more: Generate PDFs at scale on Kubernetes using OpenFaaS and Puppeteer

One function, many clouds

Does your product need to run on multiple different clouds? Perhaps you're considering writing an abstraction layer to cover your bases?

When faced with this question, one of our customers built their low-code platform with OpenFaaS.

OpenFaaS functions are built as OCI images, which means they're portable, and integrate well with teams who use containers or Kubernetes already.

See also: Low-code automation with OpenFaaS

We speak your language

Functions can be written in any language, and are built into portable OCI images.

We have official templates available for: Go, Java, Python, C#, Ruby, Node.js, PHP, or you can write your own.

You can even bring along your existing microservices written with custom frameworks like Express.js, Vert.x, Flask, ASP.NET Core, FastAPI and Django.

Find templates in the store or build your own.

$ faas-cli new --lang java11 java-fn
package com.openfaas.function;

import com.openfaas.model.IHandler;
import com.openfaas.model.IRequest;
import com.openfaas.model.IResponse;
import com.openfaas.model.Response;

public class Handler implements IHandler {

    public IResponse Handle(IRequest req) {
        Response res = new Response();
        res.setBody("Hello, world!");

        return res;
    }
}
$ faas-cli template store pull golang-middleware
$ faas-cli new --lang golang-middleware go-fn
package function

import (
	"fmt"
	"io"
	"net/http"
)

func Handle(w http.ResponseWriter, r *http.Request) {
	var input []byte

	if r.Body != nil {
		defer r.Body.Close()
		body, _ := io.ReadAll(r.Body)
		input = body
	}

	w.Write([]byte(fmt.Sprintf("Body: %s", string(input))))
}
$ faas-cli template store pull python3-http
$ faas-cli new --lang python3-http python3-fn
def handle(event, context):
    return {
        "statusCode": 200,
        "body": "Hello from OpenFaaS!"
    }
$ faas-cli new --lang node18 javascript-fn
"use strict"

module.exports = async (event, context) => {
  const result = {
    status: "Received input: " + JSON.stringify(event.body)
  }

  return context.status(200).succeed(result)
}
$ faas-cli template store pull bash-streaming
$ faas-cli new --lang bash-streaming bash-fn
#!/bin/sh

for i in $(seq 1 100)
do
    sleep 0.001
    echo "Hello" $i
done
$ faas-cli new --lang dockerfile ruby
FROM ruby:2.7-alpine

WORKDIR /home/app
COPY . .

RUN bundle install

CMD ["ruby", "main.rb"]

Trusted in production

Learn about OpenFaaS features & benefits

See options

Thank you to our sponsors

Become an individual or corporate sponsor via GitHub:

Become a sponsor

Start your Serverless Journey

Understand the use-cases for functions and learn at your own pace with the OpenFaaS handbook: Serverless For Everyone Else.


Run OpenFaaS in Production

OpenFaaS Pro is a commercially licensed version of OpenFaaS meant for production use.

You'll gain access to new features to make your team more efficient and productive.