smolagents

DeepResearch Part 3: Getting the best web data for your research

Summary This post details building a robust web data pipeline using SmolAgents. We’ll create tools to retrieve content from various web endpoints, convert it to a consistent format (Markdown), store it efficiently, and then evaluate its relevance and quality using Large Language Models (LLMs). This pipeline is crucial for building a knowledge base for LLM applications. Web Data Converter (MarkdownConverter) We leverage the MarkdownConverter class, inspired by the one in autogen, to handle the diverse formats encountered on the web.
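The post's converter is based on autogen's MarkdownConverter class; as an illustrative stand-in (not the post's actual implementation), here is a minimal stdlib-only sketch of the core idea, converting just HTML headings and paragraphs to Markdown:

```python
from html.parser import HTMLParser

class TinyMarkdownConverter(HTMLParser):
    """Minimal HTML -> Markdown sketch: handles <h1>-<h6> and <p> only."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.blocks = []      # completed markdown blocks
        self._buf = []        # text of the block being built
        self._prefix = ""     # "#" prefix for headings

    def handle_starttag(self, tag, attrs):
        if tag in {"h1", "h2", "h3", "h4", "h5", "h6"}:
            self._prefix = "#" * int(tag[1]) + " "
        elif tag == "p":
            self._prefix = ""

    def handle_endtag(self, tag):
        if tag in {"h1", "h2", "h3", "h4", "h5", "h6", "p"}:
            text = "".join(self._buf).strip()
            if text:
                self.blocks.append(self._prefix + text)
            self._buf, self._prefix = [], ""

    def handle_data(self, data):
        self._buf.append(data)

def html_to_markdown(html: str) -> str:
    """Convert an HTML fragment to Markdown blocks separated by blank lines."""
    conv = TinyMarkdownConverter()
    conv.feed(html)
    return "\n\n".join(conv.blocks)
```

A real pipeline would also handle lists, links, tables, PDFs, and other formats, which is exactly what a full MarkdownConverter provides.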

DeepResearch Part 1: Building an arXiv Search Tool with SmolAgents

Summary This post kicks off a series of three where we’ll build, extend, and use the open-source DeepResearch application inspired by the Hugging Face blog post. In this first part, we’ll focus on creating an arXiv search tool that can be used with SmolAgents. DeepResearch aims to empower research by providing tools that automate and streamline the process of discovering and managing academic papers. This series will demonstrate how to build such tools, starting with a powerful arXiv search tool.
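The search tool the post builds ultimately queries arXiv's public Atom API; as a hedged sketch of the request it would issue (the helper name and parameter defaults here are illustrative, not the post's code), assuming the public endpoint at export.arxiv.org:

```python
from urllib.parse import urlencode

ARXIV_API = "http://export.arxiv.org/api/query"

def build_arxiv_query(search_terms: str, start: int = 0, max_results: int = 10) -> str:
    """Build a request URL for the public arXiv Atom API.

    The API returns an Atom XML feed of matching papers; a tool would
    fetch this URL and parse out titles, abstracts, and links.
    """
    params = {
        "search_query": f"all:{search_terms}",  # search all fields
        "start": start,                          # pagination offset
        "max_results": max_results,
    }
    return f"{ARXIV_API}?{urlencode(params)}"
```

Wrapping a function like this in a SmolAgents tool then lets an agent decide when and how to search arXiv on its own.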

RAG: Retrieval-Augmented Generation

Summary Retrieval-Augmented Generation (RAG) is a powerful technique that enhances large language models (LLMs) by allowing them to use external knowledge sources. An Artificial Intelligence (AI) system consists of components working together to apply knowledge learned from data. Some common components of these systems are: Large Language Model (LLM): Typically the core component of the system; often there is more than one. These are large models that have been trained on massive amounts of data and can make intelligent predictions based on their training.
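The essence of RAG is retrieve-then-generate: find the documents most relevant to a query and prepend them to the prompt. As a minimal stdlib-only sketch (using bag-of-words cosine similarity as a stand-in for a real embedding model, with illustrative function names):

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(qv, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_rag_prompt(query: str, docs: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

A production system would swap the word-count vectors for dense embeddings and a vector store, but the retrieve-then-generate shape is the same.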

CAG: Cache-Augmented Generation

Summary CAG performs better but does not solve the key problem RAG was created for: small context windows. Retrieval-Augmented Generation (RAG) is currently (early 2025) the most popular way to use external knowledge in LLM operations. RAG allows you to enhance your LLM with data beyond the data it was trained on. There are many great RAG solutions and products. RAG has some drawbacks: there can be significant retrieval latency as it searches for and organizes the correct data.
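Where RAG retrieves per query, CAG loads the whole knowledge base into the context once and reuses it, eliminating retrieval latency (at the cost of requiring a context window big enough to hold everything). A minimal sketch of that contrast, with illustrative class and method names:

```python
class CachedContext:
    """CAG sketch: assemble the full knowledge base into the context once,
    then reuse it for every query with no per-query retrieval step."""

    def __init__(self, documents: list[str]):
        # In a real CAG system this preamble would be encoded once and its
        # KV cache reused across queries; here we just keep the string.
        self.preamble = "Knowledge base:\n" + "\n".join(f"- {d}" for d in documents)

    def prompt_for(self, question: str) -> str:
        """Build a prompt for one query against the preloaded context."""
        return f"{self.preamble}\n\nQuestion: {question}"
```

The trade-off is visible in the sketch: every prompt carries the entire knowledge base, which is exactly why small context windows forced RAG in the first place.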

Agents: A tutorial on building agents in python

LLM Agents Agents are used to enhance and extend the functionality of LLMs. In this tutorial, we’ll explore what LLM agents are, how they work, and how to implement them in Python. What Are LLM Agents? An agent is an autonomous process that may use the LLM and other tools multiple times to achieve a goal. The LLM output often controls the workflow of the agent(s). What is the difference between Agents and LLMs or AI?
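The definition above — an autonomous loop where the LLM's output controls the workflow — can be sketched in a few lines. This is a generic illustration, not the tutorial's code: the `TOOL:name:arg` / `FINAL:answer` protocol and the function names are assumptions made for the example.

```python
def run_agent(llm, tools: dict, task: str, max_steps: int = 5) -> str:
    """Minimal agent loop: the LLM's reply picks the next tool
    ('TOOL:name:arg') or finishes ('FINAL:answer')."""
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        decision = llm("\n".join(history))
        if decision.startswith("FINAL:"):
            return decision[len("FINAL:"):]
        _, name, arg = decision.split(":", 2)   # e.g. "TOOL:add:2+3"
        observation = tools[name](arg)          # run the chosen tool
        history.append(f"Observation: {observation}")
    return "max steps reached"
```

Note the loop itself contains no task logic: the LLM decides which tool to call and when to stop, which is the key difference between an agent and a single LLM call.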