Knative Build, Serve, and Event Explained
Key Points
- Knative, an open‑source project co‑created by IBM, Google and other industry leaders, adds serverless capabilities and native tooling to Kubernetes.
- It is built around three “primitives” – **Build**, **Serve**, and **Event** – which together enable developers to run serverless workloads on a Kubernetes cluster.
- The **Build** primitive streamlines the traditionally manual process of pulling source code, containerizing it, and pushing the image to a registry by handling the entire pipeline directly on the cluster, even supporting templates like Cloud Foundry buildpacks.
- By abstracting these steps, Knative lets developers focus on code rather than the complex YAML manifests and CI/CD tooling normally required for Kubernetes deployments.
**Source:** [https://www.youtube.com/watch?v=69OfdJ5BIzs](https://www.youtube.com/watch?v=69OfdJ5BIzs) **Duration:** 00:07:58

## Sections

- [00:00:00](https://www.youtube.com/watch?v=69OfdJ5BIzs&t=0s) **Knative Overview: Build, Serve, Event** - Developer advocate explains Knative's three core primitives—Build, Serve, and Event—that enable serverless workloads and native tooling on top of Kubernetes.

## Full Transcript
Hi everyone, my name is Sai Vennam and I'm a developer advocate with IBM. Today I want to talk about Knative, one of the fastest growing open source projects in the cloud native landscape today. Knative was recently announced and was developed by engineers from IBM, Google, and a number of other industry leaders. Essentially, Knative is a platform installed on top of Kubernetes that brings the capability of running serverless workloads to Kubernetes. In addition, it provides a number of utilities that make working with your cloud native apps on Kubernetes feel truly native. I'd say there are three major components that make up Knative on top of Kubernetes: the first is Build, next we've got Serve, and finally we have Event. These three components are actually called primitives, because they are the building blocks that make up Knative. Essentially, they are what allow it to run serverless workloads within Kubernetes, but they also provide endpoints and tools to make working with Knative feel more natural and easy.
Let's get started with Build. I like to start with an example: what does every developer need to do when pushing their application to Kubernetes? Well, first they need to start with code, right? Every developer has code, and you can imagine it's probably hosted up on GitHub. The next thing we want to do is take that code and turn it into a container, because if the first step is code, the second step is always going to be turning it into a container: something that Kubernetes and Docker, or whatever container technology you might be using, can understand. To do that, it might be something really simple like a `docker build`, or, depending on how complex your build is, it could be a set of steps that end up with that final container image. By the way, to actually make that process happen you'll need to pull the code down to a local machine, or have something like Travis or Jenkins do that container build for you. Once the image is created, you'll want to push it to a cloud registry, something like Docker Hub or maybe a private image registry. Once it's up there, Kubernetes is able to find it and deploy it, and to do that you'll probably want to create some much-loved manifest files; depending on how complex your deploy is, you might have multiple YAML files to make that deployment happen.
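As a rough illustration of that manual manifest step (the app name, labels, and image here are hypothetical, not from the video), a minimal Deployment might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical app name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: docker.io/example/my-app:latest   # the image pushed to the registry
```

And this is often paired with separate Service or Ingress manifests, which is why even a simple deploy can mean multiple YAML files.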
For a developer who is iteratively developing on top of Kubernetes, this is a lot of steps, and it can be quite tedious. With Knative, we can bring this entire process onto your Kubernetes cluster: everything from source code management to complex or custom builds. There are even a number of templates out there; for example, if you like Cloud Foundry buildpacks, there's a template for that to build your application. So with Knative Build you can do all of that within your cluster itself. It makes things a lot easier for developers doing iterative development, and especially because all of these steps can be simplified into just a single manifest deploy, it becomes faster and more agile to develop applications.
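As a sketch of what that single manifest looked like: at the time of this video, Knative exposed a `Build` custom resource that pulls source from Git and runs a build template on-cluster. The repository URL, template name, and image target below are hypothetical, and the Build API has since been superseded by Tekton:

```yaml
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: example-build
spec:
  source:
    git:
      url: https://github.com/example/app.git    # hypothetical repo
      revision: master
  template:
    name: buildpack          # e.g. a Cloud Foundry buildpack build template
    arguments:
      - name: IMAGE
        value: docker.io/example/app             # hypothetical registry target
```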
We've talked about Build, so the next thing I want to talk about is Serve. Serve has a very important role here, and I think it's one of the more exciting parts of Knative. It actually comes with Istio components built in, and if you're not familiar with Istio, check the link in the description below for more information. To summarize, Istio comes with a number of capabilities: things like traffic management, intelligent routing, and automatic scaling, as well as scale to zero, which is a pretty cool concept. Essentially, with serverless applications you want to be able to scale up to, say, a thousand pods, and then bring it all the way back down to zero if no one is accessing that service.
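As a sketch of how those scale bounds are expressed in current Knative Serving (the service and image names here are hypothetical, and this uses the later `serving.knative.dev/v1` API rather than what shipped when the video was recorded), per-revision autoscaling annotations set the floor and ceiling, with a floor of zero enabling scale-to-zero:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: scale-demo                    # hypothetical
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"      # allow scale to zero
        autoscaling.knative.dev/max-scale: "1000"   # cap at a thousand pods
    spec:
      containers:
        - image: docker.io/example/app              # hypothetical image
```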
Let's take a look at what a sample service managed by Knative Serve would look like. At the top we'll start with a Service, and this can be your traditional kind of microservice, or it can be a function as well. That Service points at and manages two different things: a Route and a Configuration. There's one really cool thing about Knative Serve that I haven't mentioned yet: every time you do a push, it'll actually keep that revision stored. So let's say we've done a couple of pushes to this service: we've got revision 1 as well as revision 2, where revision 2 is the newer version of the app, and the Configuration is actually going to manage both of those. The Route, essentially, manages all the traffic and routes it to one or more of those revisions. Using Istio's traffic management capabilities, we could say that 10% of all traffic gets routed to revision 2 and 90% stays on revision 1. That way we can start planning a staged rollout, or even do some A/B testing.
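The 90/10 split described above can be sketched with the `traffic` block of a Knative Service. This uses the current `serving.knative.dev/v1` API, and the service name, revision names, and image are hypothetical:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service                       # hypothetical
spec:
  template:
    spec:
      containers:
        - image: docker.io/example/app:v2   # hypothetical image for revision 2
  traffic:
    - revisionName: my-service-rev1      # hypothetical revision names
      percent: 90
    - revisionName: my-service-rev2
      percent: 10
```

Pushing a new image under the same Service creates the next revision, so the route can be shifted gradually without a separate deployment object.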
So again, Knative Serve, just to summarize, provides revision snapshots, intelligent routing, and scaling: all really cool features. I think Build and Serve together are going to solve a lot of the problems that people might be having when doing CI/CD and microservice deployment to Kubernetes.

The last thing I want to talk about is Eventing. This is one of those things that's still a work in progress in Knative; the project was released recently, and at the time we're creating this video it's still evolving, but there are a number of capabilities already available. Eventing is an integral part of any serverless platform: you need the ability to create triggers, some sort of event that gets responded to by the platform itself. Let's say, for example, that you have a delivery rerouting algorithm, and any time inclement weather is detected you want to trigger that algorithm as a serverless action. Triggers like that are exactly what Eventing lets you set up. Another thing you can do with Eventing, and it's a different use case, is hook it into your CI/CD pipeline. Let's say that once you have this whole flow created, you want to kick it off automatically any time there's a new push to master, or maybe any time there's a new push to master you want, say, 10% of traffic pushed to that version of the app. With Eventing, you can make that a reality.
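The inclement-weather example above can be sketched with a Knative Eventing `Trigger`, which filters events coming off a broker and delivers the matches to a service. The event type and service names are hypothetical, and this uses the later `eventing.knative.dev/v1` API rather than what existed when the video was recorded:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: weather-trigger                # hypothetical
spec:
  broker: default
  filter:
    attributes:
      type: com.example.weather.alert  # hypothetical CloudEvents type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: reroute-deliveries         # hypothetical rerouting action
```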
So creating pipelines with Eventing is also an option, and as this feature gets more developed and becomes more robust, we'll see a number of options and opportunities for taking advantage of Knative Eventing. These three components together are what make Knative so powerful. Knative is definitely shaping up to be one of the biggest players in the cloud native and Kubernetes landscape. I hope you enjoyed my explanation of Knative today. Definitely stay tuned for more lightboarding sessions in the future, and again, if you want to learn more, check out the IBM Cloud blog.