▲ 5 | Free Beta: Fine-tuning SDK for LLMs, comments welcome (hpc-ai.com) | by CrazyLLM | 1 comment
▲ 4 | Show HN: LLM fine-tuning without infra or ML expertise (early access) (tinytune.xyz) | by Jacques2Marais | 4 comments
▲ 3 | Ask HN: Anyone Successfully fine-tuning LLMs? | by Mythli | 2 comments
▲ 2 | A Survey on Federated Fine-Tuning of Large Language Models (openreview.net) | by mldev_exe | 0 comments
▲ 2 | Show HN: FT-Lab – Lightweight TinyLlama Fine-Tuning (Full FT / LoRA / QLoRA) (github.com) | by Sai-HN | 0 comments
▲ 2 | FT-Lab: A Lightweight Toolkit for Fine-Tuning and RAG Evaluation | by Sai-HN | 0 comments
▲ 1 | Show HN: QLoRA fine-tuning in .zse INT4 format by ZSE | by zyoralabs | 0 comments
▲ 1 | Qwen3.5 Fine-Tuning Guide – Unsloth Documentation (unsloth.ai) | by bilsbie | 0 comments
▲ 1 | Bio-Inspired Adapters: Improving Models Beyond LoRA Fine-Tuning (genbais.com) | by lazarko | 0 comments
▲ 1 | Fine-Tuning Qwen3 Embeddings for product category classification (blog.ivan.digital) | by ipotapov | 0 comments
▲ 1 | Show HN: Zagora, Distributed fine-tuning platform on mixed GPUs over internet (app.zagora.ai) | by miyamotomusashi | 0 comments
▲ 1 | Cognitive architecture that hit #1 on LiveBench (68.5%) with zero fine-tuning (truthagi.ai) | by felipemayamuniz | 1 comment
▲ 1 | Show HN: GEKO (up to 80% compute savings on LLM fine-tuning) (github.com) | by SyedAbdurR2hman | 0 comments
▲ 1 | Benchmarking the best base small model for fine-tuning (distillabs.ai) | by maciejgryka | 0 comments
▲ 1 | Show HN: 100% LLM accuracy–no fine-tuning, JSON only (github.com) | by MysticBirdie | 0 comments
▲ 1 | Show HN: Courtyard – Open-source macOS app for local MLX fine-tuning Text (github.com) | by tuwenbo0120 | 0 comments
▲ 1 | Deep-Dive into LLM Fine-Tuning (fireworks.ai) | by smurda | 0 comments
▲ 1 | Show HN: TuFT – Multi-tenant fine-tuning platform with Tinker-compat API (github.com) | by ekzhu | 0 comments
▲ 1 | Show HN: MadLab – A standalone desktop app for local LLM fine-tuning (github.com) | by Archimedes1618 | 0 comments
▲ 1 | Toroidal Logit Bias – Reduce LLM hallucinations 40% with no fine-tuning (github.com) | by slye514 | 1 comment
▲ 1 | Show HN: Simple, Fast, Accessible Fine-Tuning (commissioned.tech) | by rbshamsu | 1 comment
▲ 1 | Fine-tuning open LLM judges to outperform GPT-5.2 (together.ai) | by zainhsn | 0 comments
▲ 1 | Show HN: LLM fine-tuning without infra or ML expertise (tinytune.xyz) | by Jacques2Marais | 0 comments
▲ 1 | Training and fine-tuning an Artificial Intelligence (github.com) | by tvali | 0 comments
▲ 1 | Diversity vs. Density: A data strategy comparison for fine-tuning VLMs (huggingface.co) | by silvervein | 1 comment
▲ 1 | Persistent Backdoor Attacks Under Continual Fine-Tuning of LLMs (arxiv.org) | by PaulHoule | 0 comments
▲ 1 | Parameter-efficient fine-tuning in tinygrad (dxuuu.xyz) | by todsacerdoti | 0 comments
▲ 1 | Why Your AI "Fine-Tuning" Budget Is a Total Waste of Capital in 2026 (noemititarenco.com) | by dvt | 0 comments
▲ 1 | I used RL fine-tuning to make an LLM generate ugly and unpythonic FizzBuzz code (seantey.github.io) | by seanrrr | 1 comment
▲ 1 | Fine-Tuning Is (Probably) a Trap (bits.logic.inc) | by sgk284 | 0 comments
▲ 1 | Fine-tuning Qwen3 at home to respond to any prompt with a dad joke (medium.com) | by shutty | 0 comments
▲ 1 | Show HN: Fine-tuning Qwen3 at home to respond to any prompt with a dad joke (nixiesearch.substack.com) | by shutty | 0 comments
▲ 1 | GPU Poor Continuous Learning: Making Agents Smarter Without Fine-Tuning (ashpreetbedi.com) | by bediashpreet | 0 comments
▲ 1 | Generative Graph Vocabularies for Robust Graph Foundation Models Fine-Tuning (arxiv.org) | by PaulHoule | 0 comments
▲ 1 | Show HN: FT-Lab – Lightweight TinyLlama Fine-Tuning (Full FT / LoRA / QLoRA) (github.com) | by Sai-HN | 0 comments
▲ 1 | 20x Faster TRL Fine-Tuning with RapidFire AI (huggingface.co) | by ibobev | 0 comments
▲ 1 | Ask HN: Advice for getting into post-training / fine-tuning of LLMs? | by hedgehog0 | 0 comments