Parameter-Efficient Fine-Tuning (PEFT) Techniques for LLMs Training Course
Parameter-Efficient Fine-Tuning (PEFT) is a collection of techniques that enable efficient adaptation of large language models (LLMs) by modifying only a small subset of parameters.
This instructor-led, live training (online or onsite) is aimed at intermediate-level data scientists and AI engineers who wish to fine-tune large language models more affordably and efficiently using methods like LoRA, Adapter Tuning, and Prefix Tuning.
By the end of this training, participants will be able to:
- Understand the theory behind parameter-efficient fine-tuning approaches.
- Implement LoRA, Adapter Tuning, and Prefix Tuning using Hugging Face PEFT.
- Compare performance and cost trade-offs of PEFT methods vs. full fine-tuning.
- Deploy and scale fine-tuned LLMs with reduced compute and storage requirements.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Course Outline
Introduction to Parameter-Efficient Fine-Tuning (PEFT)
- Motivation and limitations of full fine-tuning
- Overview of PEFT: goals and benefits
- Applications and use cases in industry
LoRA (Low-Rank Adaptation)
- Concept and intuition behind LoRA
- Implementing LoRA using Hugging Face and PyTorch
- Hands-on: Fine-tuning a model with LoRA (see the sketch below)
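For orientation, the snippet below is a minimal sketch of the LoRA hands-on exercise using the Hugging Face PEFT library; the base model name and hyperparameters are illustrative assumptions rather than course-mandated settings.

```python
# Minimal LoRA sketch with Hugging Face PEFT; model and hyperparameters are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # example base model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the injected low-rank matrices are trainable
```

The wrapped model can then be trained with the usual transformers Trainer or a plain PyTorch loop; gradients flow only through the low-rank matrices while the base weights stay frozen.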
Adapter Tuning
- How adapter modules work
- Integration with transformer-based models
- Hands-on: Applying Adapter Tuning to a transformer model (see the sketch below)
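To show how adapter modules work, the sketch below hand-rolls a bottleneck adapter in plain PyTorch; it is a conceptual illustration only, and the hands-on session would typically rely on a maintained library implementation rather than this hypothetical class.

```python
# Conceptual bottleneck adapter in plain PyTorch (illustrative, not a library API).
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project, apply a non-linearity, up-project, then add a residual connection."""

    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# During adapter tuning the pretrained transformer weights stay frozen; only the inserted
# adapter parameters (and typically layer norms and the task head) receive gradients.
```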
Prefix Tuning
- Using soft prompts for fine-tuning
- Strengths and limitations compared to LoRA and adapters
- Hands-on: Prefix Tuning on an LLM task (see the sketch below)
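A minimal prefix tuning sketch with the Hugging Face PEFT library follows; the base model and the number of virtual tokens are illustrative assumptions.

```python
# Prefix tuning sketch with Hugging Face PEFT; settings are illustrative.
from transformers import AutoModelForCausalLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # example base model

prefix_config = PrefixTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,  # length of the trainable soft prefix
)

model = get_peft_model(base_model, prefix_config)
model.print_trainable_parameters()  # only the prefix parameters are trainable
```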
Evaluating and Comparing PEFT Methods
- Metrics for evaluating performance and efficiency (see the helper sketch after this list)
- Trade-offs in training speed, memory usage, and accuracy
- Benchmarking experiments and result interpretation
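One simple efficiency metric used when comparing PEFT methods is the fraction of parameters that are actually trainable; the hypothetical helper below computes it for any PyTorch model.

```python
# Hypothetical helper for comparing parameter efficiency across fine-tuning setups.
import torch.nn as nn

def count_parameters(model: nn.Module) -> tuple[int, int, float]:
    """Return (trainable, total, trainable fraction) parameter counts."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable, total, trainable / total
```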
Deploying Fine-Tuned Models
- Saving and loading fine-tuned models (see the sketch after this list)
- Deployment considerations for PEFT-based models
- Integrating into applications and pipelines
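As a rough guide to the saving-and-loading step, the sketch below assumes a LoRA model wrapped with Hugging Face PEFT as in the earlier examples; the adapter directory name and base model are illustrative.

```python
# Saving and reloading a PEFT adapter; only the small adapter weights are written to disk.
from transformers import AutoModelForCausalLM
from peft import PeftModel

model.save_pretrained("my-lora-adapter")  # `model` is the PEFT-wrapped model from the earlier sketches

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
inference_model = PeftModel.from_pretrained(base_model, "my-lora-adapter")
inference_model = inference_model.merge_and_unload()  # optional for LoRA-style adapters: fold weights into the base model
```

Merging the adapter into the base weights removes the extra adapter computation at inference time, at the cost of no longer being able to hot-swap adapters on the same base model.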
Best Practices and Extensions
- Combining PEFT with quantization and distillation (see the sketch after this list)
- Use in low-resource and multilingual settings
- Future directions and active research areas
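For the quantization combination mentioned above, a hedged QLoRA-style sketch using transformers' BitsAndBytesConfig together with PEFT is shown below; the quantization settings and base model are illustrative assumptions.

```python
# QLoRA-style sketch: a 4-bit quantized base model with LoRA adapters trained on top.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # example base model
    quantization_config=bnb_config,
)
base_model = prepare_model_for_kbit_training(base_model)  # freezes and prepares layers for k-bit training

model = get_peft_model(base_model, LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16))
```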
Summary and Next Steps
Requirements
- An understanding of machine learning fundamentals
- Experience working with large language models (LLMs)
- Familiarity with Python and PyTorch
Audience
- Data scientists
- AI engineers
Open Training Courses require 5+ participants.
Related Courses
Advanced LangGraph: Optimization, Debugging, and Monitoring Complex Graphs
35 Hours
LangGraph is a framework for building stateful, multi-actor LLM applications as composable graphs with persistent state and control over execution.
This instructor-led, live training (online or onsite) is aimed at advanced-level AI platform engineers, DevOps for AI, and ML architects who wish to optimize, debug, monitor, and operate production-grade LangGraph systems.
By the end of this training, participants will be able to:
- Design and optimize complex LangGraph topologies for speed, cost, and scalability.
- Engineer reliability with retries, timeouts, idempotency, and checkpoint-based recovery.
- Debug and trace graph executions, inspect state, and systematically reproduce production issues.
- Instrument graphs with logs, metrics, and traces, deploy to production, and monitor SLAs and costs.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Building Coding Agents with Devstral: From Agent Design to Tooling
14 Hours
Devstral is an open-source framework designed for building and running coding agents that can interact with codebases, developer tools, and APIs to enhance engineering productivity.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level ML engineers, developer-tooling teams, and SREs who wish to design, implement, and optimize coding agents using Devstral.
By the end of this training, participants will be able to:
- Set up and configure Devstral for coding agent development.
- Design agentic workflows for codebase exploration and modification.
- Integrate coding agents with developer tools and APIs.
- Implement best practices for secure and efficient agent deployment.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Open-Source Model Ops: Self-Hosting, Fine-Tuning and Governance with Devstral & Mistral Models
14 Hours
Devstral and Mistral models are open-source AI technologies designed for flexible deployment, fine-tuning, and scalable integration.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level ML engineers, platform teams, and research engineers who wish to self-host, fine-tune, and govern Mistral and Devstral models in production environments.
By the end of this training, participants will be able to:
- Set up and configure self-hosted environments for Mistral and Devstral models.
- Apply fine-tuning techniques for domain-specific performance.
- Implement versioning, monitoring, and lifecycle governance.
- Ensure security, compliance, and responsible usage of open-source models.
Format of the Course
- Interactive lecture and discussion.
- Hands-on exercises in self-hosting and fine-tuning.
- Live-lab implementation of governance and monitoring pipelines.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
LangGraph Applications in Finance
35 Hours
LangGraph is a framework for building stateful, multi-actor LLM applications as composable graphs with persistent state and control over execution.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level professionals who wish to design, implement, and operate LangGraph-based finance solutions with proper governance, observability, and compliance.
By the end of this training, participants will be able to:
- Design finance-specific LangGraph workflows aligned to regulatory and audit requirements.
- Integrate financial data standards and ontologies into graph state and tooling.
- Implement reliability, safety, and human-in-the-loop controls for critical processes.
- Deploy, monitor, and optimize LangGraph systems for performance, cost, and SLAs.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
LangGraph Foundations: Graph-Based LLM Prompting and Chaining
14 Hours
LangGraph is a framework for building graph-structured LLM applications that support planning, branching, tool use, memory, and controllable execution.
This instructor-led, live training (online or onsite) is aimed at beginner-level developers, prompt engineers, and data practitioners who wish to design and build reliable, multi-step LLM workflows using LangGraph.
By the end of this training, participants will be able to:
- Explain core LangGraph concepts (nodes, edges, state) and when to use them.
- Build prompt chains that branch, call tools, and maintain memory.
- Integrate retrieval and external APIs into graph workflows.
- Test, debug, and evaluate LangGraph apps for reliability and safety.
Format of the Course
- Interactive lecture and facilitated discussion.
- Guided labs and code walkthroughs in a sandbox environment.
- Scenario-based exercises on design, testing, and evaluation.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
LangGraph in Healthcare: Workflow Orchestration for Regulated Environments
35 Hours
LangGraph enables stateful, multi-actor workflows powered by LLMs with precise control over execution paths and state persistence. In healthcare, these capabilities are crucial for compliance, interoperability, and building decision-support systems that align with medical workflows.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level professionals who wish to design, implement, and manage LangGraph-based healthcare solutions while addressing regulatory, ethical, and operational challenges.
By the end of this training, participants will be able to:
- Design healthcare-specific LangGraph workflows with compliance and auditability in mind.
- Integrate LangGraph applications with medical ontologies and standards (FHIR, SNOMED CT, ICD).
- Apply best practices for reliability, traceability, and explainability in sensitive environments.
- Deploy, monitor, and validate LangGraph applications in healthcare production settings.
Format of the Course
- Interactive lecture and discussion.
- Hands-on exercises with real-world case studies.
- Implementation practice in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
LangGraph for Legal Applications
35 Hours
LangGraph is a framework for building stateful, multi-actor LLM applications as composable graphs with persistent state and precise control over execution.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level professionals who wish to design, implement, and operate LangGraph-based legal solutions with the necessary compliance, traceability, and governance controls.
By the end of this training, participants will be able to:
- Design legal-specific LangGraph workflows that preserve auditability and compliance.
- Integrate legal ontologies and document standards into graph state and processing.
- Implement guardrails, human-in-the-loop approvals, and traceable decision paths.
- Deploy, monitor, and maintain LangGraph services in production with observability and cost controls.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Building Dynamic Workflows with LangGraph and LLM Agents
14 Hours
LangGraph is a framework for composing graph-structured LLM workflows that support branching, tool use, memory, and controllable execution.
This instructor-led, live training (online or onsite) is aimed at intermediate-level engineers and product teams who wish to combine LangGraph’s graph logic with LLM agent loops to build dynamic, context-aware applications such as customer support agents, decision trees, and information retrieval systems.
By the end of this training, participants will be able to:
- Design graph-based workflows that coordinate LLM agents, tools, and memory.
- Implement conditional routing, retries, and fallbacks for robust execution.
- Integrate retrieval, APIs, and structured outputs into agent loops.
- Evaluate, monitor, and harden agent behavior for reliability and safety.
Format of the Course
- Interactive lecture and facilitated discussion.
- Guided labs and code walkthroughs in a sandbox environment.
- Scenario-based design exercises and peer reviews.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
LangGraph for Marketing Automation
14 Hours
LangGraph is a graph-based orchestration framework that enables conditional, multi-step LLM and tool workflows, ideal for automating and personalizing content pipelines.
This instructor-led, live training (online or onsite) is aimed at intermediate-level marketers, content strategists, and automation developers who wish to implement dynamic, branching email campaigns and content generation pipelines using LangGraph.
By the end of this training, participants will be able to:
- Design graph-structured content and email workflows with conditional logic.
- Integrate LLMs, APIs, and data sources for automated personalization.
- Manage state, memory, and context across multi-step campaigns.
- Evaluate, monitor, and optimize workflow performance and delivery outcomes.
Format of the Course
- Interactive lectures and group discussions.
- Hands-on labs implementing email workflows and content pipelines.
- Scenario-based exercises on personalization, segmentation, and branching logic.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Le Chat Enterprise: Private ChatOps, Integrations & Admin Controls
14 Hours
Le Chat Enterprise is a private ChatOps solution that provides secure, customizable, and governed conversational AI capabilities for organizations, with support for RBAC, SSO, connectors, and enterprise app integrations.
This instructor-led, live training (online or onsite) is aimed at intermediate-level product managers, IT leads, solution engineers, and security/compliance teams who wish to deploy, configure, and govern Le Chat Enterprise in enterprise environments.
By the end of this training, participants will be able to:
- Set up and configure Le Chat Enterprise for secure deployments.
- Enable RBAC, SSO, and compliance-driven controls.
- Integrate Le Chat with enterprise applications and data stores.
- Design and implement governance and admin playbooks for ChatOps.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Cost-Effective LLM Architectures: Mistral at Scale (Performance / Cost Engineering)
14 Hours
Mistral is a high-performance family of large language models optimized for cost-effective production deployment at scale.
This instructor-led, live training (online or onsite) is aimed at advanced-level infrastructure engineers, cloud architects, and MLOps leads who wish to design, deploy, and optimize Mistral-based architectures for maximum throughput and minimum cost.
By the end of this training, participants will be able to:
- Implement scalable deployment patterns for Mistral Medium 3.
- Apply batching, quantization, and efficient serving strategies.
- Optimize inference costs while maintaining performance.
- Design production-ready serving topologies for enterprise workloads.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Productizing Conversational Assistants with Mistral Connectors & Integrations
14 Hours
Mistral AI is an open AI platform that enables teams to build and integrate conversational assistants into enterprise and customer-facing workflows.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level product managers, full-stack developers, and integration engineers who wish to design, integrate, and productize conversational assistants using Mistral connectors and integrations.
By the end of this training, participants will be able to:
- Integrate Mistral conversational models with enterprise and SaaS connectors.
- Implement retrieval-augmented generation (RAG) for grounded responses.
- Design UX patterns for internal and external chat assistants.
- Deploy assistants into product workflows for real-world use cases.
Format of the Course
- Interactive lecture and discussion.
- Hands-on integration exercises.
- Live-lab development of conversational assistants.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Enterprise-Grade Deployments with Mistral Medium 3
14 Hours
Mistral Medium 3 is a high-performance, multimodal large language model designed for production-grade deployment across enterprise environments.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level AI/ML engineers, platform architects, and MLOps teams who wish to deploy, optimize, and secure Mistral Medium 3 for enterprise use cases.
By the end of this training, participants will be able to:
- Deploy Mistral Medium 3 using API and self-hosted options.
- Optimize inference performance and costs.
- Implement multimodal use cases with Mistral Medium 3.
- Apply security and compliance best practices for enterprise environments.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Mistral for Responsible AI: Privacy, Data Residency & Enterprise Controls
14 Hours
Mistral AI is an open and enterprise-ready AI platform that provides features for secure, compliant, and responsible AI deployment.
This instructor-led, live training (online or onsite) is aimed at intermediate-level compliance leads, security architects, and legal/ops stakeholders who wish to implement responsible AI practices with Mistral by leveraging privacy, data residency, and enterprise control mechanisms.
By the end of this training, participants will be able to:
- Implement privacy-preserving techniques in Mistral deployments.
- Apply data residency strategies to meet regulatory requirements.
- Set up enterprise-grade controls such as RBAC, SSO, and audit logs.
- Evaluate vendor and deployment options for compliance alignment.
Format of the Course
- Interactive lecture and discussion.
- Compliance-focused case studies and exercises.
- Hands-on implementation of enterprise AI controls.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Multimodal Applications with Mistral Models (Vision, OCR, & Document Understanding)
14 Hours
Mistral models are open-source AI technologies that now extend into multimodal workflows, supporting both language and vision tasks for enterprise and research applications.
This instructor-led, live training (online or onsite) is aimed at intermediate-level ML researchers, applied engineers, and product teams who wish to build multimodal applications with Mistral models, including OCR and document understanding pipelines.
By the end of this training, participants will be able to:
- Set up and configure Mistral models for multimodal tasks.
- Implement OCR workflows and integrate them with NLP pipelines.
- Design document understanding applications for enterprise use cases.
- Develop vision-text search and assistive UI functionalities.
Format of the Course
- Interactive lecture and discussion.
- Hands-on coding exercises.
- Live-lab implementation of multimodal pipelines.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.