7 points | Google identifies over 100k prompts used in distillation attacks (cloud.google.com) | by carterpeterson | 4 comments
3 points | Anthropic: Industrial-scale distillation attacks on our models by Chinese AI (twitter.com) | by mudil | 1 comment
3 points | CausalWan-Moe Preview: Applying Self-Forcing Distillation to Wan2.2 (hao-ai-lab.github.io) | by wlsaidhi | 0 comments
2 points | Fair Use Paradox: Training and Distillation (jasonwillems.com) | by jayw_lead | 0 comments
2 points | Alleged Distillation Attacks by DeepSeek, Moonshot AI, and MiniMax (twitter.com) | by mike_kamau | 0 comments
2 points | Anthropic announces proof of distillation at scale by MiniMax, DeepSeek, Moonshot (twitter.com) | by Jimmc414 | 0 comments
2 points | Generalized On-Policy Distillation with Reward Extrapolation (arxiv.org) | by fzliu | 0 comments
2 points | Show HN: Symbolic Circuit Distillation: prove program to LLM circuit equivalence (github.com) | by nsomani | 0 comments
2 points | Fair Use Paradox: If Training on Public Data Is Fair Use, Why Not Distillation? (jasonwillems.com) | by jayw_lead | 1 comment
1 point | Show HN: Aside – Local meeting capture with vault-native AI distillation (github.com) | by jphorism | 0 comments
1 point | The Distillation Problem, It's Not a Cold War, It's Napster (stickybit.com.br) | by TiMagazine | 0 comments
1 point | Antidistillation for AI Openness (antidistillation.com) | by rvttt | 0 comments
1 point | Antidistillation preserves AI openness, originality, and safety (antidistillation.com) | by umairnadeem123 | 0 comments
1 point | Anthropic could be exaggerating about the distillation efforts of Chinese labs [video] (youtube.com) | by logicprog | 0 comments
1 point | Anthropic joins OpenAI in flagging distillation campaigns by Chinese AI firms (cnbc.com) | by seydor | 0 comments
1 point | Detecting and Preventing Distillation Attacks (anthropic.com) | by meetpateltech | 0 comments
1 point | In Forecasting, Search >> Distillation (spylab.ai) | by polymorph1sm | 0 comments
1 point | GTIG AI Threat Tracker: Distillation, Experimentation, Adversarial Use (cloud.google.com) | by thread_id | 0 comments
1 point | Retrieval-Aware Distillation for Transformer-SSM Hybrids (arxiv.org) | by readitalready | 0 comments
1 point | Distillation, Experimentation, and Integration of AI for Adversarial Use (cloud.google.com) | by nsoonhui | 0 comments
1 point | Quantization-Aware Distillation (ternarysearch.blogspot.com) | by paladin314159 | 0 comments
1 point | Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf] (research.nvidia.com) | by gmays | 0 comments
1 point | Self-Distillation Enables Continual Learning (arxiv.org) | by simonpure | 0 comments
1 point | Nvidia: Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf] (research.nvidia.com) | by tosh | 0 comments
1 point | Ministral 3 – pruning via Cascade Distillation (arxiv.org) | by everlier | 0 comments
1 point | Quantization and distillation effects on code LLMs (arxiv.org) | by nkko | 0 comments
1 point | A perfect distillation of the social uselessness of finance (pluralistic.net) | by zdw | 0 comments
1 point | Black-Box On-Policy Distillation of Large Language Models (arxiv.org) | by Jimmc414 | 0 comments