5 points | Free Beta: Fine-tuning SDK for LLMs, comments welcome (hpc-ai.com) | by CrazyLLM | 1 comment
4 points | Show HN: ShadowPEFT – Centralized and Detachable Parameter-Efficient Fine-Tuning (github.com) | by yokee | 0 comments
4 points | Show HN: LLM fine-tuning without infra or ML expertise (early access) (tinytune.xyz) | by Jacques2Marais | 4 comments
3 points | Ask HN: Anyone Successfully fine-tuning LLMs? | by Mythli | 2 comments
2 points | A Survey on Federated Fine-Tuning of Large Language Models (openreview.net) | by mldev_exe | 0 comments
2 points | Show HN: FT-Lab – Lightweight TinyLlama Fine-Tuning (Full FT / LoRA / QLoRA) (github.com) | by Sai-HN | 0 comments
2 points | FT-Lab: A Lightweight Toolkit for Fine-Tuning and RAG Evaluation | by Sai-HN | 0 comments
1 point | Desktop app for generating LLM fine-tuning datasets (github.com) | by AronDaron | 1 comment
1 point | LLM from scratch (32l) – Interventions: updated instruction fine-tuning results (gilesthomas.com) | by gpjt | 0 comments
1 point | Fine-tuning and deploying Gemma 4 is not that easy (ghost.oxen.ai) | by eloyalbmartinez | 0 comments
1 point | A guide to model quantization in fine-tuning (and how to pick the right GGUF) (siquick.com) | by siquick | 0 comments
1 point | Gemma 4 Fine-Tuning Guide (unsloth.ai) | by danielhanchen | 0 comments
1 point | No fine-tuning, no RAG – boosting Claude Code's bioinformatics up to 92% (github.com) | by jaechang | 1 comment
1 point | Fine-tuning Whisper to my speech: 27% to 6.5% WER (vivekkairi.com) | by vivekkairi | 0 comments
1 point | Show HN: Pre-training, fine-tuning, and evals platform (oumi.ai) | by oli_kitty | 0 comments
1 point | Fine-Tuning Large Language Models (LLMs) with a Production-Grade Pipeline (2023) (alex000kim.com) | by teleforce | 0 comments
1 point | How fine-tuning made my chatbot worse and broke my RAG pipeline (adandai.wordpress.com) | by allessa | 0 comments
1 point | Autonomous RL Fine-Tuning on Ephemeral GPUs: Extending Karpathy's Autoresearch (templarresearch.substack.com) | by synapz_org | 0 comments
1 point | An Efficient Heterogeneous Co-Design for Fine-Tuning on a Single GPU (arxiv.org) | by matt_d | 0 comments
1 point | Why aren't we fine-tuning more? (natemeyvis.com) | by gmays | 0 comments
1 point | Why aren't we fine-tuning more? (natemeyvis.com) | by vinhnx | 0 comments
1 point | TrajectoryKit, a competitive open-source deep research agent without fine-tuning (williamlugoloobi.com) | by stansApprentice | 0 comments
1 point | Show HN: Shard-based scheduling for 100x more fine-tuning experiments on 4 GPUs (rapidfire.ai) | by kamranrapidfire | 0 comments
1 point | Neurvance, Pre-cleaned datasets for LLM fine-tuning, free to download (neurvance.com) | by Adam_SDDk | 0 comments
1 point | Cursor's 'Composer 2' model is apparently just Kimi K2.5 with RL fine-tuning (old.reddit.com) | by limoce | 0 comments
1 point | Cursor's Composer 2 model identifier reveals Kimi K2.5 base with RL fine-tuning (twitter.com) | by fynnx | 1 comment
1 point | PMetal – (Powdered Metal) LLM Fine-Tuning Framework for Apple Silicon (github.com) | by epistates | 1 comment
1 point | 30 years fine-tuning micro-homestead oasis (youtube.com) | by fallinditch | 0 comments
1 point | Generator SFT and DPO datasets for tool-calling LoRA fine-tuning (no LLM needed) (nothumanallowed.com) | by senza1dio | 1 comment
1 point | Show HN: Fast-Axolotl – Rust extensions that make Axolotl fine-tuning 77x faster (github.com) | by ticktockten | 0 comments
1 point | Reinforcement fine-tuning use cases (developers.openai.com) | by teleforce | 0 comments
1 point | MARL: Runtime Middleware That Reduces LLM Hallucination Without Fine-Tuning (huggingface.co) | by seawolf2357 | 0 comments
1 point | Show HN: QLoRA fine-tuning in .zse INT4 format by ZSE | by zyoralabs | 0 comments
1 point | Qwen3.5 Fine-Tuning Guide – Unsloth Documentation (unsloth.ai) | by bilsbie | 0 comments
1 point | Bio-Inspired Adapters: Improving Models Beyond LoRA Fine-Tuning (genbais.com) | by lazarko | 0 comments
1 point | Fine-Tuning Qwen3 Embeddings for product category classification (blog.ivan.digital) | by ipotapov | 0 comments
1 point | Show HN: Zagora, Distributed fine-tuning platform on mixed GPUs over internet (app.zagora.ai) | by miyamotomusashi | 0 comments
1 point | Cognitive architecture that hit #1 on LiveBench (68.5%) with zero fine-tuning (truthagi.ai) | by felipemayamuniz | 1 comment
1 point | Show HN: GEKO (up to 80% compute savings on LLM fine-tuning) (github.com) | by SyedAbdurR2hman | 0 comments
1 point | Benchmarking the best base small model for fine-tuning (distillabs.ai) | by maciejgryka | 0 comments
1 point | Show HN: 100% LLM accuracy – no fine-tuning, JSON only (github.com) | by MysticBirdie | 0 comments
1 point | Show HN: Courtyard – Open-source macOS app for local MLX fine-tuning Text (github.com) | by tuwenbo0120 | 0 comments
1 point | Deep-Dive into LLM Fine-Tuning (fireworks.ai) | by smurda | 0 comments
1 point | Show HN: TuFT – Multi-tenant fine-tuning platform with Tinker-compat API (github.com) | by ekzhu | 0 comments
1 point | Show HN: MadLab – A standalone desktop app for local LLM fine-tuning (github.com) | by Archimedes1618 | 0 comments
1 point | Toroidal Logit Bias – Reduce LLM hallucinations 40% with no fine-tuning (github.com) | by slye514 | 1 comment
1 point | Show HN: Simple, Fast, Accessible Fine-Tuning (commissioned.tech) | by rbshamsu | 1 comment
1 point | Fine-tuning open LLM judges to outperform GPT-5.2 (together.ai) | by zainhsn | 0 comments
1 point | Show HN: LLM fine-tuning without infra or ML expertise (tinytune.xyz) | by Jacques2Marais | 0 comments
1 point | Training and fine-tuning an Artificial Intelligence (github.com) | by tvali | 0 comments