Chapter 2
Fission Architecture Deep Dive
Fission stands out as a robust, extensible Kubernetes-native serverless platform, distinguished by its modular architecture and high-performance design. This chapter dissects the inner workings of Fission, peeling back the layers of its core components, APIs, and plug-in systems. Readers will gain deep visibility into how Fission orchestrates code execution, enabling the flexibility and control required for advanced, production-grade workloads.
2.1 Fission Core Components
Fission's architecture is fundamentally organized around four principal subsystems: the Controller, Executor, Router, and Builder. Each component addresses a key responsibility within the function deployment and execution lifecycle, collectively ensuring system reliability, scalability, and high performance. The modular nature of these subsystems provides a clear separation of concerns while enabling extensibility and customization tailored to diverse operational environments.
The Controller acts as the central orchestrator of the Fission framework. It serves API requests for function lifecycle operations such as creation, update, and deletion, persisting function metadata, environment configurations, and associated resources as Kubernetes custom resources in the cluster's etcd-backed store. The Controller's core responsibility is to maintain cluster-wide state consistency and to enforce policies governing function deployment. To facilitate scalability and fault tolerance, it operates statelessly, delegating all state management to that external store.
Function deployment begins with the Controller invoking the Builder, which transforms source code into executable artifacts. The Builder subsystem abstracts the compilation and packaging process, supporting multiple build strategies and environments. It leverages container-based build processes to produce lightweight, language-specific function containers or binaries optimized for runtime execution. Running builds in containers keeps them reproducible and isolated, while custom builder implementations can be plugged in to accommodate specialized language runtimes or proprietary build pipelines.
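As a concrete illustration, the sketch below shows a Package resource of the kind the Builder consumes. The package name, source-archive URL, and build command are hypothetical, and field names should be checked against the Fission version in use.

apiVersion: fission.io/v1
kind: Package
metadata:
  name: hello-pkg                             # hypothetical package name
spec:
  environment:
    name: nodejs                              # environment whose builder image runs the build
    namespace: default
  source:
    type: url
    url: https://example.com/hello-src.zip   # hypothetical source archive
  buildcmd: "build"                           # command executed inside the builder container

When such a package is created, a builder pod for the referenced environment fetches the source archive, runs the build command, and stores the resulting deployment archive for the Executor to pull at provisioning time.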
Once functions are built, the Executor takes responsibility for provisioning and managing execution environments, a critical aspect of the serverless paradigm. The Executor deploys function containers across nodes within the Kubernetes cluster, using caching strategies and resource pooling to minimize cold-start latency. It monitors container health, auto-scales deployed instances based on workload, and gracefully handles failures and restarts. The pluggable Executor interface allows support for various runtimes and execution models, including traditional containers and WebAssembly modules, so users can select or implement an executor that matches their performance and resource-utilization goals.
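Concretely, Fission's primary built-in executor types are poolmgr, which draws on pools of pre-warmed generic pods to minimize cold starts, and newdeploy, which backs each function with its own Kubernetes Deployment and horizontal autoscaling. The fragment below is a hedged sketch of how a Function spec selects an executor and scaling bounds; the field casing follows the fission.io/v1 schema as commonly seen in spec files, so verify it against your installed version.

spec:
  InvokeStrategy:
    StrategyType: execution
    ExecutionStrategy:
      ExecutorType: newdeploy   # or poolmgr for pooled, low-latency execution
      MinScale: 1               # keep one replica warm to avoid cold starts
      MaxScale: 5               # autoscaling upper bound
      TargetCPUPercent: 80      # CPU utilization target driving scale decisions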
Routing of incoming function invocations is handled by the Router, which sits at the front line of traffic ingress. The Router is responsible for mapping HTTP requests to the appropriate function versions and forwarding them efficiently to the corresponding Executor-managed execution units. Its design emphasizes low-latency request dispatch, load balancing, and intelligent retry mechanisms to maintain high availability and responsiveness. The Router maintains a dynamic routing table that reflects real-time function deployment status, querying the Controller for updates as functions scale or redeploy.
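Routes themselves are declarative objects: an HTTPTrigger binds a URL path and HTTP method to a function by name. A minimal sketch, with illustrative trigger and path names:

apiVersion: fission.io/v1
kind: HTTPTrigger
metadata:
  name: hello-http          # illustrative trigger name
spec:
  relativeurl: /hello       # path the Router matches against
  method: GET
  functionref:
    type: name
    name: hello-world       # function that receives matched requests

Creating this object causes the Router to add /hello to its routing table and begin forwarding matching GET requests to Executor-managed instances of hello-world.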
The interactions between these subsystems form a tightly integrated workflow. A request to deploy a new function triggers the Controller to validate and store metadata, then initiates the Builder to generate function artifacts. Upon build completion, the Controller instructs the Executor to provision execution resources. The Router dynamically updates its routing to include the new function endpoint, seamlessly directing traffic to ready instances. During runtime, the Controller continues to monitor system health and usage metrics, enabling informed scaling decisions for the Executor and efficient traffic management by the Router.
The modular design of Fission's core components embodies principles conducive to extensibility and customized deployments. Each subsystem exposes well-defined APIs and interfaces, permitting alternative implementations without disrupting overall system integrity. For example, organizations requiring enhanced security or compliance may integrate custom authorization logic in the Controller, while performance-sensitive scenarios can leverage optimized Executors or Builders tailored for specific hardware or runtime environments. This separation of concerns also simplifies debugging, testing, and independent scaling of each subsystem, contributing to Fission's robustness in production settings.
In summary, the Controller, Builder, Executor, and Router collectively constitute a cohesive architecture that underpins Fission's serverless capabilities. Their coordinated functionality covers the full spectrum from code submission to invocation routing, balancing flexibility with operational efficiency. The pluggability and clear delineation of responsibilities embedded in this architecture make Fission a versatile platform enabling diverse use cases across cloud-native ecosystems.
2.2 CRDs, APIs, and Function Lifecycle
Fission's architecture leverages Kubernetes Custom Resource Definitions (CRDs) as a foundational mechanism to extend the Kubernetes API with function-specific abstractions. These CRDs, combined with controller loops and a well-defined API schema, enable comprehensive lifecycle management of serverless functions, facilitating automation and scalability within a Kubernetes-native environment.
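For orientation, the sketch below shows, in pared-down form, how such a custom resource type is registered with the Kubernetes API server; the definitions Fission actually installs carry much fuller validation schemas.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: functions.fission.io
spec:
  group: fission.io
  scope: Namespaced
  names:
    kind: Function
    plural: functions
    singular: function
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          # simplified for illustration; real schemas validate spec fields
          x-kubernetes-preserve-unknown-fields: true

Once registered, Function objects can be created, listed, and watched through standard Kubernetes API machinery, which is what lets Fission's controllers reconcile them like any native resource.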
At the core of Fission's extensibility are several CRDs, including Function, Environment, HTTPTrigger, and TimeTrigger. Each CRD encapsulates operational metadata and configuration details pertinent to different facets of function deployment and invocation. The Function CRD represents a user-defined function: its source code, environment reference, deployment strategy, and resource requirements. The Environment CRD abstracts underlying runtimes such as Node.js, Python, or Go, allowing consistent and reusable execution contexts.
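An Environment manifest typically names a runtime image and, optionally, a builder image. The sketch below uses the stock Node.js images; treat the exact field names, and the poolsize setting for the pooled executor, as version-dependent details to verify.

apiVersion: fission.io/v1
kind: Environment
metadata:
  name: nodejs
spec:
  version: 2                      # environment interface version with build support
  runtime:
    image: fission/node-env       # container image that loads and runs function code
  builder:
    image: fission/node-builder   # container image that compiles and packages source
    command: build
  poolsize: 3                     # warm pods kept ready by the pooled executor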
The CRDs closely follow Kubernetes API conventions, including spec, status, and metadata fields, promoting declarative configuration and reconciliation. For example, the Function spec contains a reference to the function's source (e.g., a Git repository or inline code), the environment selection, and scaling parameters. The associated status tracks runtime state such as the active pod count and the outcome of the most recent deployment.
A typical Function CRD manifest illustrates this structure:
apiVersion: fission.io/v1
kind: Function
metadata:
  name: hello-world
spec:
  environment: nodejs
  package:
    type: git
    url: https://github.com/example/hello.git
    subpath: /
  ...
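Triggers follow the same declarative pattern. As a further hedged sketch, a TimeTrigger could invoke the hello-world function on a cron schedule; the trigger name and schedule below are illustrative.

apiVersion: fission.io/v1
kind: TimeTrigger
metadata:
  name: hello-every-5m        # illustrative trigger name
spec:
  cron: "*/5 * * * *"         # fire every five minutes
  functionref:
    type: name
    name: hello-world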