
Open-Source AI Stack Guide

Key Points

  • Open‑source AI can be built end‑to‑end with freely available components—models, data pipelines, orchestration, and application layers—representing multi‑trillion‑dollar value and rapid community‑driven innovation.
  • The core of the stack is the model: open‑source options include base LLMs, community‑fine‑tuned variants for specific tasks or domains, and specialized models (e.g., biomedical image anomaly detectors), whereas closed models are accessed via managed APIs.
  • Using open‑source models requires you to provide your own inference engine (e.g., Ollama for local use, vLLM or TensorRT‑LLM for servers), while closed‑source APIs handle inference, optimization, and infrastructure for you.
  • The data layer is largely similar for both approaches—developers must identify data sources, build connectors, and convert unstructured inputs (like PDFs) into structured formats before feeding them to the model.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=_QfxGZGITGw](https://www.youtube.com/watch?v=_QfxGZGITGw)
**Duration:** 00:08:21

## Sections

- [00:00:00](https://www.youtube.com/watch?v=_QfxGZGITGw&t=0s) **Open‑Source AI Stack Overview** - The passage outlines the key components, benefits, and trade‑offs of building AI systems entirely with open‑source models, data, orchestration, and application layers, guiding developers on when to choose open versus closed AI solutions.
- [00:04:51](https://www.youtube.com/watch?v=_QfxGZGITGw&t=291s) **Open vs Closed AI Deployment** - The speaker contrasts open‑source AI stacks—offering full control over deployment location and customizable orchestration through agent frameworks—with closed‑source, fully managed APIs that simplify integration but limit deployment and orchestration flexibility.

## Full Transcript
Whether you're creating a simple AI chatbot or a complex AI agent, it's possible to architect a solution from end to end fully with open-source components. Understanding these pieces, how they work, their benefits, and their frictions will help you evaluate where you want to consider open versus closed solutions in what you're building.

Researchers from Harvard Business School estimate the value of all open-source software, meaning software whose source code is publicly available and distributed freely, to be worth 8.8 trillion dollars. Within AI specifically, many of the most exciting new features from commercial AI tools are rapidly recreated as open-source implementations, which are made by, and distributed freely among, the AI community. Here, we'll cover the main components of open-source AI: models, data, orchestration, and the application layer, and talk about the trade-offs for each. Because open source focuses on software, we'll exclude the infrastructure and hardware layer, but that's a really important consideration too. Deciding whether to use open or closed AI in your stack is one of the most important choices a developer will make.

The central point of the AI stack is the model. Different types of open-source models are available. They range from base LLMs (large language models) to fine-tuned versions that are created by the community and made available for others to use. These could be fine-tuned on specific tasks, like question answering, or on specific domains, like the legal domain. There are also other specialized models available in open source; an example would be a model to do anomaly detection in biomedical images. If you're using an open-source model, you will also have to implement your own inference engine to actually run these models.
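To make "providing your own inference engine" concrete, here is a minimal sketch of what a client request to a locally hosted engine might look like. The endpoints, port numbers, and model name are illustrative assumptions (Ollama's default local HTTP API, and the OpenAI-compatible endpoint a server like vLLM exposes), not details given in the video:

```python
import json

# Hypothetical request builders for two common local-serving styles.
# URLs and model names are assumptions chosen for illustration.

def ollama_generate_request(model: str, prompt: str) -> tuple[str, bytes]:
    """Build a request for an Ollama-style /api/generate endpoint."""
    url = "http://localhost:11434/api/generate"
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return url, body.encode("utf-8")

def openai_chat_request(model: str, prompt: str) -> tuple[str, bytes]:
    """Build a request for an OpenAI-compatible endpoint (as served by vLLM)."""
    url = "http://localhost:8000/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body.encode("utf-8")

# Actually sending the request is left out so the sketch stays self-contained;
# with a server running, you would POST `body` to `url` (e.g. via urllib).
url, body = ollama_generate_request("llama3", "Summarize open-source AI.")
```

The point of the sketch is the division of labor: with open-source models, this serving layer is yours to stand up and operate; with a closed API, an equivalent endpoint is simply hosted for you.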
Options to run these models include open-source libraries that allow you to run models on your laptop, one of the most famous being Ollama, and open-source inference engines that run on a server; popular examples include vLLM and TensorRT-LLM. On the other hand, if you're using a closed model, it is usually available via an API. You have to worry about making a call to the API, but this often means that the other layers of the stack are fully managed for you: you don't have to worry about the inference engine that runs the model, including the optimizations that make it efficient, or about the infrastructure it runs on.

The next layer of the stack is data. This is one layer where the elements for open and closed are actually the same. First, you have to consider the data sources you want to bring in to supplement or augment your AI model. Then there are data connectors, or integrations, to pull in data in a more automated way from tools or other sources. Next comes data conversion: if there is data you want to use in your AI system, but it exists in an unstructured form such as a PDF, you first have to convert it to a more structured format. And then there are RAG pipelines and vector databases, which are where you store your data once it has been vectorized into embeddings so that your model can pull it into context. These elements are the same between open and closed, but what varies is, of course, that one is open-source code that is freely available and one is not. So one consideration is: is it freely available? One benefit of open source is that it's free, while closed source is usually part of a commercial tool. A second consideration is that with open source, because the source code is available, you can make the customizations or adaptations that you need.
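The data-layer steps just described (conversion to text, embedding, vector storage, retrieval) can be sketched end to end with a toy in-memory example. The bag-of-words "embedding" over a tiny fixed vocabulary is a deliberate stand-in; a real pipeline would use an embedding model and a vector database, neither of which is shown here:

```python
import math

# Stand-in vocabulary; a real embedding model needs no such list.
VOCAB = ["model", "laptop", "run", "web", "interface", "streamlit", "ollama", "local"]

def toy_embed(text: str) -> list[float]:
    """Toy embedding: normalized bag-of-words counts over VOCAB."""
    words = text.lower().replace(".", "").replace("?", "").split()
    vec = [float(words.count(v)) for v in VOCAB]
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

class ToyVectorStore:
    """Minimal in-memory stand-in for a vector database."""
    def __init__(self):
        self.items: list[tuple[str, list[float]]] = []
    def add(self, chunk: str) -> None:
        self.items.append((chunk, toy_embed(chunk)))
    def query(self, question: str, k: int = 1) -> list[str]:
        q = toy_embed(question)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[1]), reverse=True)
        return [chunk for chunk, _ in ranked[:k]]

# Chunks as they might come out of a PDF-to-text conversion step.
store = ToyVectorStore()
store.add("Ollama runs open-source models locally on a laptop.")
store.add("Streamlit builds quick web interfaces.")
context = store.query("How do I run a model on my laptop?")
```

The shape of the pipeline, convert, embed, store, then retrieve by similarity into the model's context, is the same whether each piece is an open-source component or part of a managed commercial tool.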
Whereas with closed source, some of those customizations might already be built out of the box; but if they aren't, you don't have the ability to customize. A third consideration is control over where it's deployed. Because open-source code is freely available to you, you can set it up on any server you choose: you can keep it on premises, or you can deploy it to a public cloud. Closed offerings are mostly available via an API in a fully managed, hosted solution, so you don't have as much control over where your private data might be going to or coming from.

The next layer is orchestration. Orchestration defines how you break down your AI system into smaller tasks. This could include reasoning and planning how your AI system will tackle a problem. It could also include executing, that is, actually making tool calls or function calls. And it could include loops to review what your agent has come up with and improve the quality of the response. How you actually implement these things is determined by which open-source agent framework you choose. On the other hand, in a fully closed-source stack, there are commercial platforms that allow you to do agentic tasks and control the orchestration through an API, so what you have to worry about is making an API call that matches up with those specs. That is, on the one hand, simpler, but in some cases it might be oversimplified, because you don't have as much control over the exact structure of your agent, or the ability to customize it, as you would with an open-source agent framework.

Finally, there is the application layer. This defines the interface that your user will use to interact with your AI solution. On the open side, some solutions emphasize customizability: you could use tools like Open WebUI or AnythingLLM to give you full control over the user experience.
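Returning to the orchestration layer for a moment: the plan, execute, and review cycle described above can be sketched without any agent framework at all. All three stubs here are hypothetical stand-ins, not the API of any real framework:

```python
# Framework-free sketch of the plan -> execute -> review loop from the
# orchestration layer. The planner, tool, and reviewer are all stubs.

def plan(task: str) -> list[str]:
    """Reasoning/planning: break the task into smaller steps."""
    return [f"look up: {task}", f"draft answer for: {task}"]

def execute(step: str) -> str:
    """Executing: a real agent would make a tool or function call here."""
    return f"result of ({step})"

def review(results: list[str]) -> tuple[bool, str]:
    """Review loop: check quality and decide whether another round is needed."""
    answer = "; ".join(results)
    return len(results) > 0, answer

def run_agent(task: str, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        steps = plan(task)
        results = [execute(s) for s in steps]
        done, answer = review(results)
        if done:
            return answer
    return "gave up"

answer = run_agent("compare open and closed AI stacks")
```

An open-source agent framework gives you hooks into each of these stages; a closed orchestration API collapses the whole loop behind a single call, which is the simplicity-versus-control trade-off described above.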
There are also options that optimize for quick setup: things like Gradio or Streamlit, which let you very quickly create web-based interfaces with minimal setup for interacting with an LLM or AI-based solution. On the closed side, the primary route would be to build from scratch, meaning embedding your AI solution directly in the application, whether it's a web application or a mobile application that this fits into. Understanding each of these layers—models, data, orchestration, and application—gives you the insight to make informed choices. There may be cases where you want the convenience of prebuilt closed-source solutions, but it's also valuable to remember open-source AI options, which offer transparent and adaptable solutions that benefit from community innovations.
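As one concrete instance of the quick-setup route mentioned above, a minimal Gradio interface can be this small. The handler is a placeholder for a real model or orchestration call, and having Gradio installed (`pip install gradio`) is an assumption of the sketch:

```python
def answer(message: str) -> str:
    # Placeholder: a real app would call your model / orchestration layer here.
    return f"Echo from the model layer: {message}"

try:
    import gradio as gr  # assumes `pip install gradio`
    demo = gr.Interface(fn=answer, inputs="text", outputs="text")
    # demo.launch()  # uncomment to serve a local web UI in your browser
except ImportError:
    pass  # Gradio not installed; the handler above still works on its own
```

Swapping the placeholder for a call into any of the earlier layers turns this into a working front end, which is exactly the appeal of the quick-setup tools over building an interface from scratch.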