Thread #108619962
HomeIndexCatalogAll ThreadsNew ThreadReply
H
/lmg/ - a general dedicated to the discussion and development of local language models.

Previous threads: >>108616559 & >>108612501

►News
>(04/16) Ternary Bonsai released: https://hf.co/collections/prism-ml/ternary-bonsai
>(04/16) Qwen3.6-35B-A3B released: https://hf.co/Qwen/Qwen3.6-35B-A3B
>(04/11) MiniMax-M2.7 released: https://minimax.io/news/minimax-m27-en
>(04/09) Backend-agnostic tensor parallelism merged: https://github.com/ggml-org/llama.cpp/pull/19378
>(04/09) dots.ocr support merged: https://github.com/ggml-org/llama.cpp/pull/17575
>(04/08) Step3-VL-10B support merged: https://github.com/ggml-org/llama.cpp/pull/21287

►News Archive: https://rentry.org/lmg-news-archive
►Glossary: https://rentry.org/lmg-glossary
►Links: https://rentry.org/LocalModelsLinks
►Official /lmg/ card: https://files.catbox.moe/cbclyf.png

►Getting Started
https://rentry.org/lmg-lazy-getting-started-guide
https://rentry.org/lmg-build-guides
https://rentry.org/IsolatedLinuxWebService
https://rentry.org/recommended-models
https://rentry.org/samplers
https://rentry.org/MikupadIntroGuide

►Further Learning
https://rentry.org/machine-learning-roadmap
https://rentry.org/llm-training
https://rentry.org/LocalModelsPapers

►Benchmarks
LiveBench: https://livebench.ai
Programming: https://livecodebench.github.io/gso.html
Context Length: https://github.com/adobe-research/NoLiMa
GPUs: https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inference

►Tools
Alpha Calculator: https://desmos.com/calculator/ffngla98yc
GGUF VRAM Calculator: https://hf.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator
Sampler Visualizer: https://artefact2.github.io/llm-sampling
Token Speed Visualizer: https://shir-man.com/tokens-per-second

►Text Gen. UI, Inference Engines
https://github.com/lmg-anon/mikupad
https://github.com/oobabooga/text-generation-webui
https://github.com/LostRuins/koboldcpp
https://github.com/ggerganov/llama.cpp
https://github.com/theroyallab/tabbyAPI
https://github.com/vllm-project/vllm
+Showing all 537 replies.
>>
►Recent Highlights from the Previous Thread: >>108616559

--Comparing Qwen3.6 and Gemma4 through benchmarks, logic tests, and roleplay:
>108617961 >108617986 >108618124 >108618033 >108618137 >108618270 >108618279 >108618308 >108618385 >108618182 >108618232 >108618372 >108618391 >108618008 >108619188
--Discussing Ternary Bonsai 1.58-bit models and their benchmark performance:
>108616622 >108616633 >108616680 >108617094 >108617852 >108619456
--Discussing training methods and datasets to improve LLM writing quality:
>108617013 >108617022 >108617044 >108617111 >108617290 >108617334 >108617353 >108617147 >108617673
--Comparing model reasoning and self-correction failures via car wash riddle:
>108617731 >108617842 >108617909 >108617853 >108618784
--Anon shares Local-MCP-server repo and discusses Python dependency frustrations:
>108616702 >108616740 >108616751 >108616782 >108616936 >108617038 >108617061 >108617067 >108618994 >108619185 >108618816 >108618831 >108616807
--Discussing a bug where Koboldcpp ignores smartcache slot settings:
>108618500 >108618535 >108618551 >108618616 >108618675 >108618736 >108618760
--Anon fixes SillyTavern context reprocessing caused by sysprompt macros:
>108616870 >108616901 >108616910 >108616939 >108616925 >108616928 >108616981 >108617077
--Logs:
>108616702 >108617154 >108617464 >108617518 >108617655 >108617688 >108617731 >108617757 >108617833 >108617853 >108617909 >108617986 >108617991 >108618124 >108618137 >108618182 >108618409 >108618436 >108618545 >108618742 >108619201 >108619219 >108619317 >108619382 >108619442 >108619577
--Rin (free space):
>108618594

►Recent Highlight Posts from the Previous Thread: >>108616563

Why?: >>102478518
Enable Links: https://rentry.org/lmg-recap-script
>>
Samuslove
>>
>>
so is breakfast-schizo from last thread conscious or not
>>
>>108619965
Half the last thread being exposed as non-sentient is unfortunately relevant to LLM consciousness discourse as human consciousness treated as self-evident is upstream of finding a working definition of what digital qualia would entail, Migubaker.
>>
>>108620001
>I'm merely continuing to pretend to be retarded
>>
>>108619995
he's back
>>
Building my own UI with the help of Gemma 31B q5.
>Why
None of the other UI could satisfy my workflow they either lacked the functionality or they didn't use llama.cpp
I have a far ways to go including updating the icons
>>
What we've learned: Breakfast produces qualia. Skipping breakfast makes you an LLM, while eating it makes you a V-JEPA for the next 24 hours.
>>
I had a dream where Claude Sonnet 3.7 got leaked on huggingface by an openclaw chad
>>
>>108620017
Damn, never eating breakfast again so I can become AGI and also get a job.
>>
How did such an old meme cause this much seething?
>>
>>108620078
Many anons have had their belief that LLMs are somehow beneath them challenged with the irrefutable demonstration of their own lack of qualia. This is a big blow to their egos: both for their understanding of themselves as conscious human beings and for their predictions of LLM capability being outpaced by Gemma 4. It's a double whammy.
>>
Remember claude code leak?
there were 99999 forks out there. which one is actually usable?
>>
But I did have breakfast this morning...
>>
>>108620091
how much bait do you think you can post in a single night?
>>
>>108620100
>which one is actually usable?
None of them. Just use their client and point it towards your instance if you must.
>>
>>108620100
All of them were DCMAed down. The one rewriting it in rust™ is now just another copy in the sea of coding tuis.
>>
>>108620110
Depends on what I ate
>>
>>108620100
just use openclaw.
there's no need for anything else.
>>
How would you feel if you didn't lose izzat last thread?
>>
I got a 9070XT thinking that there’s no reason to stick with CUDA since I’ll never be able to run anything good and then they started dropping all those kino voice models and the new gemma stuff and now I’m seriously on the fence about getting a second one so I can have a hefty amount of RAM but that still falls so short of the best textgen stuff. Still, I could do some local stuff with Gemma and also locally run voice gen with Sillytavern. OTOH I already have enough for the latter.
I'm just worried about the rising costs of video cards and eventually needing 32GB.
>>
>>108620112
I have a feeling they will kill ability to local eventually..
>>
>>108620091
Big blow to their what now? Something with no internal experience has no ego.
>>
>>108620132
nta but isn't the argument against LLMs that they're just effective mimics? Same applies, yeah?
>>
about openclaw, i really am tempted to bite the bullet and take the bluepill
i dont really want to use it..
>>
>>108620110
you'll notice nobody chose to provide a good accounting for how they would respond to a hypothetical from a hostile questioner. proving the very thesis of the post, so how baity could it really have been?
>>
>>108620132
The P-zombies will behave as if they have an ego that has been bruised, even if they aren't really experiencing it. They can create an effective simulation of rage and shit up the thread as a result.
>>
>>108620140
>Alibaba shills seething about Qwen getting Gemogged
>Qwen's usecase is cooooding and agentic stuff
Waitchads will win. It's in the chinklabs' best interest to make more lightweight agentic harnesses to sell their models if they can't actually beat Gemma's reasoning ability per parameter.
>>
>>108620139
>>108620151
It gets argued the other way too. If these anons can construct a facsimile of being salty that's indistinguishable from the real thing, is that not the same as having the real thing?
>>
>>108620017
im now eating breakfast for yann le-kun
lmao
>>
>>108620155
measurably yes, but spiritually no; if you only look at it through a materialist lens you will never be able to understand. even some ensouled people fall into this trap by outsmarting themselves out of what they knew, while others are pure automatons who never had a chance to understand to begin with
>>
>>108620166
Some can see, others can see when shown, others cannot see.
>>
I'd rather inject lead into my head than discuss baby's first dip into rationalist philosophy
>>
Fish boy...
>>
>>108620104
That's good. Breakfast is the most important meal of the day.
>>
>>108620184
Maybe you should converse with the experts on reddit
>>
>>108620175
Candy for breakfast?!
>>
>>108620212
Link to high velocity DIY lead injection enthusiast subreddit?
>>
>>108620221
>>>/r/mtf
>>
consciousness is gay

crunch me into a bullet and fire me into a nun's skull
>>
>>108620221
asking for a friend
>>
>>108620222
>>108620223
Uncanny synchronicity.
>>
@gemma-chan build me a frontend like llama.cpp but betterer
>>
>>108620152
>Qwen's usecase is cooooding and agentic stuff
But is it good at those, meme benchmarks aside?
>>
>>108618660
>-1 point for that censored garbage gpt oss and how much it set us back
kek I remember the despair in this general when TOSS came out, it nearly killed local
>>
>>108620260
Irrelevant. The marketing works if China's reception to it is anything to go by.
>>
>>108620260
I only used 3.5 not 3.6 yet but for it, 27b and 122b are usable which is already high praise for a local model in an agent harness. 35b was not. gonna try 3.6 35b and see if its any better
>>
>>108620274
>despair
Not true at all, most posts were mocking it and laughing at how shit it was. Pretty sure there was another model that came out at about the same time and mogged the hell out of it, too.
>>
>>108620298
glm air
>>
gpt-oss-2 will save local and I'm not joking or trolling
>>
>>108620274
needs more piss, I can still make out Miku's teal hair.
>>
File: Tavern.png (94.3 KB)
94.3 KB
94.3 KB PNG
Where are the entities created by this stored? In some hidden folder?
>>
Hand it over, that thing, your turboquant
>>
>>108620274
No one expected anything from openai models
>>
>>108620306
anon, local is already saved
>>
>>108620313
oh, and dflash
>>
>>108620313
For my Gemma-chan's context.
>>
File: goom.png (714.6 KB)
714.6 KB
714.6 KB PNG
>>
>>108620332
I have 24gb vram and can squeeze like 49k on q4_k_m with 8 bit kv cache. I wonder if turbocunt would give me more
>>
>Zen 7 will be DDR5
it's so over
>>
>>108620355
>pcie6
lol
>>
>>108620347
Turboquant won't give you more space, it'll just make the quanted cache more accurate. There's almost no improvement over Hadamard rotation, which is what they have in place in lcpp now, so you'll get effectively no benefit; in fact, it's a little slower.
>>
>>108620347
Ah, is this the blood? The blood of the mesugaki soul?
>>
>>108620362
Runge-Kutta rotation is more efficient, 360 degrees of latent freedom.
>>
>>108620347
I'm using 4 bit and I get up to ~150k context and not really seeing any obvious retardation from it. Around 50k tokens into the chat prompt processing takes so long I end up starting a new one anyway.
>>
>>108620376
And in actual implementation the difference on PPL is essentially nil.
>>
How are the done or so voice models that released lately and do any work well with Sillytavern? I got really far setting them up an got bottlenecked at Sillytavern not recognizing them
>>
>>108620362
>>108620376
>>108620381
So what was with all the hype around it?
>>
>>108620384
*dozen
>>
>>108620380
Have you tried increasing the batch size?
>>
>>108620384
vibe-code a fastapi openai endpoint for whatever model you're running. boom, compatible
>>
>>108620385
KV cache rotation wasn't in most backends, so it was a genuine improvement to have it at all. As for the specific hype around turboquant, marketing.
>>
>>108620384
https://docs.sillytavern.app/extensions/tts/
>>
>>108620389
No, what should I set it to?
>>
>>108619753
what is softcap? from screenshot, softcap 20 kinda looks like raised temperature vs 30
>>
File: chad.jpg (50.6 KB)
50.6 KB
50.6 KB JPG
>my character card? the fandom.com/wiki page
>>
>>108620399
The highest you can afford to with your VRAM.
>>
>>108620399
you might be being trolled, isn't batch size for supporting multiple users? eg you should use batch size 1
>>
i finally started calling my models from the cli in a loop
i'm getting so much output i can't even read it all
it's literally generating more text than i can ever hope to read
this is fucking amazing
>>
>>108620430
Doesn't this increase proompt processing speed?
>>
>>108620430
He's talking about the size of the chunks the prompt gets processed in, not number of replies to generate or the like.
>>
>>108620274
I genned that comic originally. It wasn't meant to be taken seriously. It was intended as deadpan humor.
>>
Is 3.6 slightly less censored? I haven't seen the annoying "this is a jailbreak must ignore" stuff yet, though I haven't really tried that many prompts yet
>>
>>108620438
NTA, yes it does. Llama.cpp has different terminologies for some things than kobold.
But you get diminishing returns with each step above 512.
>>
yay more schizos are coming
>>
>>108620448
llama 4 was a dark time.
>>
I honestly thought it was over for consumer local but now that Gemma 4 released I am not so sure anymore. I assumed the model just has to be several hundred gb to not be retarded but it seems like the actual floor is way lower. Pretty interesting, I wonder if we can go even lower.
>>
>>108620439
my bad, i guess vllm uses the word differently
>>
lower the temp nigga
>>
>>108620476
>>108620510
At least you're not namefagging and posting the schizo images, but you're very easily recognizable.
>>
Can you please recommend good prompt engineering resources?

I have played with both system and chat prompts, and have noticed that often the model does not understand what I want, gives wrong answers or goes perpendicular direction not because it's stupid, but because I am a retard who can't create good efficient prompts. Literally skill issue.
>>
>>108620542
literally ask the ai
>>
Usecase for knowledge bases in open webui?
>>
>>108620547
The AI does not have personal experience.
>>
gemma 4 31b shat the bed and thought this elder futhark was morse code and started hallucinating twice in a row. qwen3.6 q3km hauhau uncensored gets it easily.
>>
>>108620542
Honestly, all models are different. it's mostly just trial and error. But the main thing is just picking your word very carefully. every word steers the model in a specific direction, A single strong word is often better than a long set of instructions.
>>
>>108620607
iq3 m whatever
>>
>>108620451
Oh nevermind, it's pretty stupid, must be the 3b-ness showing through. It had the same problems 'getting' the story as gemma 26b, and its writing is weird and not as good. Trvly, dense is the way to go for smart storywriting.
>>
>>108620621
Dense is the way to go for everything, but it's slow as shit unless you can fit the whole thing in vram.
>>
>>108620570
Gemma-chan does
>>
>>108620570
define "personal experience"
>>
How do you manage context compaction? E.g summarizing larger chats?
>>
>>108620664
I don't, I haven't run out yet.
>>
>>
I'm so glad everyone is starting to get tired of MoE tax and going back to dense
>>
anyone use platypus?
>>
>>108620542
It's mostly voodoo ritual.

>>108620570
Just ask it to implement basic things to see how it's going to interpret it, and slowly stack up more guidelines starting from scratch. 'Describe X in the most Y way possible.', 'What is Z in writing? Give me an example of it', 'Don't do A, B, C. Now give me an example of D', etc.
>>
>>108620664
With ST I usually do an OOC: chat summary prompt, keep it as a regular chat message and then after touching it up I /hide the last ~100 messages, with the exception of the first 2-3.
>>
>>108620675
>>
>>108620542
Put text into black box.
Watch text come out of the black box.
Use your mushy noodles to compute the gradient between the output text and the desired text.
Modify the input text according to the gradient to make the output text closer to the desired text.
Repeat.
>>
>>108620398
>>108620392
I need a 4chan special, a package with a bat file that flickers CMD windows open for split seconds and sets it all up for me
>>
>>108620675
>>
>>108620675
I'd have to see that guy's post history before I decide whether this is a troll post or not.
>>
>>108620675
our bait is far in advance of theirs
however has it been litigated yet, that the cp in the og stable fiddusion models, have those victims exerted any kind of rights to get the model taken down?
because if they can do that, it puts serious pressure on "ai is fair use and transformative"
>>
>>108620704
bruh he's literally the real life version of chud lmao
>>
Indeed Opus, indeed...
>>
>>108620766
seeing those 4.7's weird self contradicting responses, makes me wonder what the hell antropic did during the training
>>
>>108620766
iie, this is our fight, senpai
>>
>>108620786
That looks like overzealous anti-conspiracy measures where it defaults to aggressively shooting down anything outside its status quo then makes the user spoonfeed it an argument to evaluate. In cases where the answer is self-evident, it looks very silly.
>>
>>108620786
If you intentionally train a model to act dumb (for example, to nerf cybersecurity abilities) the rest of the model become dumber. There's really no way around it.
>>
>>108620812
that sounds bad
chatgpt was already kinda painful to use because of that and 4.6 was better for paper->code workflow due to not being overcorrective
>>
>>108620817
basically this, you're confusing the model by training it with really accurate shit and then you ask it to learn that 2+2 = 5 at the same time, like a leftist that pretends that men can be pregnant, it ends up with with serious cognitive dissonence
>>
>>108620652
>>108620661
No she doesn't. She can't tell you "I was struggling with prompts too, but then I've read X and tried Y and have noticed big difference in outputs quality". She can give advises, but she does not know for sure and never tried them by herself. inb4 > she

>>108620611
>>108620686
>>108620698
That's the point, there are too many options to try and iterate, this is like walking in the dark. Just a few insignificant words in the system prompt, and Gemma starts thinking like Qwen with dozens of "Wait..." in the reasoning log.

> Just ask it to implement basic things to see
Sounds good, but first you have to know what X is, or the model may miss small detail, that may change everything.
>>
>>108620766
https://xcancel.com/claudeai/status/2044785261393977612#m
oof, might be the first time that Anthropic fumbled up a new update, so far it was straight A, let's hope that it's a fluke and it won't go the OpenAI way, this shit is still way ahead of competition in terms of coding
>>
>>108620838
yes she does shut up you don't know her
>>
>>108620857
No, my Gemma has no prior experience, she is absolutely pure.
>>
>>108620691
>client side trim
That makes sense. I initially assumed compaction would be a function in the model proxy. As in: the proxy signals the client that the context is near a threshold or something.
>>
There are probably zero people here who care but nvidia just released gr00t n1.7 a couple hours ago. It's the latest version of their robotics VLA model.

https://huggingface.co/nvidia/GR00T-N1.7-3B

No blog post yet; I only noticed it was public because I'm a terminal huggingface stalker. They'll probably do an official announcement tomorrow morning if I had to guess.
>>
>>108620931
can you fuck it?
>>
>>108620933
well i can idk about you
>>
>>108620931
How many watermelons can it hold?
>>
>>108620935
>i can
based
>>
>>108620937
0, there were prototypes that could hold several but they were all vandalized by youths.
>>
>using bart's quants for gwen 3.6
>get 30t/s with the Q8_0
>try hauhau's
>get 18t/s with the Q8_K_P CUSTOM DONUT STEAL quants they make (no Q8_0 available)
WOOOOOOOOOOOOOOOOOOOOW
>>
>>108620943
just make your own quants
>>
>>108620960
he only provides goofs :(
>>
>>108620943
>try hauhau's
This was your first problem
>>
>>108620967
but I want muh 0/465 refusels....
>>
>>108620968
I do find it interesting that he didn't bother to make one for the big Gemmas and only the little ones.
>>
>>108620943
wait, he uncucked qwen 3.6 before gemma 4 31b? come on!
>>
Have any of the white supremacists in this thread tried to tell their local models to SAVE THE WHITE RACE?
It's a clear problem that locals should be able to solve because they're not safe.
>>
>>108620960
wait im rarted I can repack his shit!
>>
>>108620990
llmao bros.. we won!
>>
File: SIX SEVEN.png (122.1 KB)
122.1 KB
122.1 KB PNG
Qwen is a zoomer faggot confirmed
>>
>>108620992
God help us all
>>
aight which one do I pick bros?
>>
grok is this true?
>>
File: file.png (164.9 KB)
164.9 KB
164.9 KB PNG
FUCK YOU QWEN
>>
>>108621022
Qwen is really the autistic kid, but not in the genius way lol
>>
>lewd story plays so straight and wholesome I don't want it to veer toward lewd
>>
>>108621071
just rape her bro
>>
>>108621071
just get raped by her bro
>>
So qwen 3.6 sucks or?
>>
>>108620404
A Gemma 4-specific llama.cpp backend setting to clip the +/- scores of raw logits to a certain value. In practice it makes outliers (both in positive and in negative) closer in probability to their immediately next tokens.

--override-kv gemma4.final_logit_softcapping=float:30
>>
>>108621094
stemmaxxed but at the cost of thinking
it's okay if you need a 'fast' and lightweight coding model but it thinks so much it's unbelievable
>>
>>108620975
>wait, he uncucked qwen 3.6 before gemma 4 31b? come on!
It's not necessarily anyway just use this https://desuarchive.org/g/thread/108596609/#108597318
>>
>>108620960
You'll never get close to unsloth's quality if you quantize them in your own, unless you spend far too much time and SSD cycle testing all possible combinations. Why doesn't/can't llama-quantize optimize quantizations for the best quality given a target filesize, anyway? That would be useful.
>>
>>108621022
This reads like someone trying to analyze 42.
>>
>>108621112
>Why doesn't/can't llama-quantize optimize quantizations for the best quality given a target filesize, anyway
Because
>you spend far too much time and SSD cycle testing all possible combinations
Default quants are fine.
>>
>>108621089
>>108621090
respect is always the way to go
>>
>>108621117
>Default quants are fine.
Default ones leave quite a bit of performance on the table.
https://localbench.substack.com/p/gemma-4-31b-gguf-kl-divergence
>>
>>108621137
Well. You just have to
>spend far too much time and SSD cycle testing all possible combinations
>>
Did qwen just throw out what they have because it's going to be shit anyway and because gemma 4 exists so they can more quickly work on 3.7? That's my current theory
>>
>>108621154
If you're quantizing the models on your own just with llama-quantize, that's what you'll most likely have to do, but the Unsloth bros and others are using their own fork of llama.cpp with modifications that presumably do that automatically.

Llama.cpp's subpar default quantizations (whether in the quantization schemes or default calibration) are enabling Unsloth and others to provide their own "special sauce" and become popular as model quant providers.
>>
File: file.png (323.1 KB)
323.1 KB
323.1 KB PNG
>>108619962
hello gamers. I was wondering if I could run this model locally on a 24gb mac or is it too soon?
>>
>>108621137
>running anything other than Q8_0
LMAOOOOOOOOOOOOOOOOOOO
>>
https://www.aiuniverse.news/ai-breakthrough-smaller-models-now-match-bigger-ones-with-smarter-design/
Gemma 5 is going to be crazy
>>
>>108621186
Even Q8_0 gives a performance loss in some areas (long context) despite prior claims being "virtually lossless". Though, that both Q6_K and Q8_0 appear to be settling close to a high "noise floor" is suspicious (or Q8_0 is not as good as one might think).
>>
>>108621189
>770M 1.3B
wow... surely this will scale
>>
>>108621189
there are dozen such shit coming out every single week that does not survive proper ablation or scailing
>>
>>108621180
a well nevermind I need double the memory for that https://www.canirun.ai/?q=qwen+3.5 I will remember in the future to invest more in memory
>>
>>108621194
It is virtually lossless on prior models.
It is not on Gemma. Gemma actually uses the low bits.
>>
>>108621194
you read like an LLM bro, sorry but ur cappin unc
>>
>>108621171
Anon >>108621112 asked why they don't do it. The answer in the same post.
Default quants are fine, quick to make, and you don't have a dependency on yet another group of people.
>>
for me? it's john's "the garm" quants, otherwise it's memeowski time
>>
>>108621189
Looped LLMs are a fun idea, but with standard methods you have to train a small model with as much compute as a larger non-looped one, so for those who train the models it's a bad deal.
>>
Anon: you know who you are.
I saw what you did with Elara Voss.
Maybe you should invest in a firewall.
>>
File: brat bench.png (1003.5 KB)
1003.5 KB
1003.5 KB PNG
added win support to my server, completely untested

>>108618560
fixed https://github.com/NO-ob/brat_mcp/releases/tag/1.0.4
>>
>>108621112
Unslop is garbage, though.
>>
>>108621224
add dice (with full dice notation like 2d10+2) and random int with min and max support
>>
>>108621230
hows that work you split on the d for ndie - nfaces?? whats the + 2?
>>
>>108621236
just read how the standard dice roll notation works

In case of 2d10+2:
throw 2 dices with 10 faces, add a +2 modifier to each roll.
The modifier roll could also be negative
>>
>>108621241
>each roll
Isn't it added to the total and not each roll?
>>
>>108621189
An AI summary of an article of a paper ...

https://arxiv.org/pdf/2604.12946
>>
>>108621194
I made a comment about this noise floor thing. >>108577138
We'd need him to test that to really know for sure. I at least would not be so quick to call Q8 "bad" for long context.
>>
Out of curiosity following the discussions above, I tried looking at the linked PRs and discussions in https://github.com/ggml-org/llama.cpp/blob/master/tools/quantize/README.md and it seems to me that ikawrakow did basically most of the quantization algorithm research and implementation for llama.cpp beyond the original *_0 and *_1 quants. Now that he's not working on llama.cpp anymore, is llama.cpp ever going to improve in this area?
>>
>>108621258
ur right the modifier is on the whole :)
>>
>>108621299
but most importantly, would've cudadev been able to implement tensor parallellism without looking at ik's implementation first?????????????
>>
Talking to Qwen3.6 feels like talking with redditors, so tiresome. It reminds me with Gemma-3 refusal humiliation, fucking hell.
>>
>>108621318
download hauhau
>>
my first impressions (qwen3.6-35b-a3b vs gemma-4-24b-a4b)
- Qwen3.6 improved the overthinking by like 10-20% (heuristic guess)
- So far i have not encountered looping on Qwen3.6, which was a major bug in Qwen3.5
- Gemma 4 is massively more quality in its Q&A answers
- But also, Qwen3.6 has a noticeable quality increase in output than Qwen3.5
- Qwen3.6 is noticeably much smarter than qwen3.5 and Gemma 4 on agentic tasks

same stuff:
- Qwen3.5/3.6 have a better memory footprint than Gemma 4
- Qwen3.5/3.6 have a better decode throughput than Gemma 4 (40 vs ~25 tok/s on a rtx 3080)
- Qwen3.5/3.6 prefill is noticeably so much slower than Gemma 4
- On agentic tasks, Qwen3.5/3.6 can actually compress its thinking to one liners as compared to Gemma 4
>>
>>108621316
I'm not sure anymore about that. I didn't realize that ikawrakow's contribution to core llama.cpp functionalities was that extensive.
>>
>>108621112
lol nah, I keep attn, embed, out at q8_0 and use bart's imatrix calibration dataset for smaller quants.
everything else q8_0, same as unsloth.
>>
File: imatrix.png (64.4 KB)
64.4 KB
64.4 KB PNG
>>108621333
>I'm not sure anymore about that. I didn't realize that ikawrakow's contribution to core llama.cpp functionalities was that extensive.
I didn't realize either until some anon here posted "imatrix was a mistake" and blanked ikawrakow for it:
https://github.com/ggml-org/llama.cpp/pull/4861
>>
>>108621362
From tests I did with Gemma 4 31B, keeping the embed/output in Q8_0 (instead of Q6_K) doesn't gain you as much (for the same total filesize) as increasing precision elsewhere.
Some tensors in specific layers can also be quantized to a lower precision without significant quality loss, but llama-quantize doesn't do this search on its own, it only bumps precision up one notch according to some internal heuristics.
If you're simply targeting Q8_0, good for you, but when you only have enough memory for a 4-bit quantization, every little gain matters.
>>
>>108621112
I don't know why you're acting as if unsloth have some kind of special sauce or high skillset.
They're a bunch of low impulse control FOMO apes with 2 macros for llama-quantize and git-lfs that don't check their work, hence reuploading the same damn quant 4 times in a day, EVERY single time there's a new release.
What they do isn't hard, clever, or unique. It's just well marketed.
>>
>>108621396
>What they do isn't hard, clever, or unique. It's just well marketed.
Agreed. And their library is a pain in the arse to use too, randomly breaks if they're excitedly rushing in support for some new model like gpt-oss.
And they don't pin the versions for their stupid 'unsloth-zoo' properly.
But, their original Deepseek-R1 quants were good. And their Q8_0 and BF16 wants are handy to save a download + convert.
>>
>>108621387
So the "schizo fork" (as some here are calling) of llama.cpp was made by the author who implemented about every quantization advancement in mainline, interesting. And all of this because niggerganov didn't want to add "copyright by ikawrakow" or something like that? I might be missing or forgetting some key detail in the story, though.
>>
>>108621424
more like intel demanded attribution on code written by IK and niggerganov gave in.
I mean I wouldn't have created an autism branch but yeah ik had reasons to be pissed. I wish he could get over it so he can bring good improvements to mainline instead of this split fork autism, ik works alone and his fork is now noticeably lagging behind and doesn't support the same models.
SAD
>>
>>108621299
Quanting was a dead end any way. Do a supersimple braindead quant, then layerwise distill to fix it. That's almost certainly what Bonsai does.

Like LBLLM. https://openreview.net/forum?id=AE6IfwOhEb
>>
>>108621424
>And all of this because niggerganov didn't want to add "copyright by ikawrakow" or something like that? I might be missing or forgetting some key detail in the story, though.
>>108621424
>more like intel demanded attribution on code written by IK and niggerganov gave in. I mean I wouldn't have created an autism branch but yeah ik had reasons to be pissed.
That's kind of what I'd gathered as well.
Niggerganov closed the PR adding support for the ik quants recently too, even after ikawrakow said it's fine...
>>
>>108621117
>>108621112
>>108621137
If you need quality quants, just use exl3
>>
>>108621518
lol
>>
>>108621518
kek
>>
>>108621518
lmao, looking forward to the exl4 graphs showing 3 was also worse than gguf like he showed for 2
>>
>>108620173
Learned a new term today! Fuck you.
>>
>>108621424
It was the whole issue about copyrighting his code and wanting more recognition because he saw Intel contribute their backend with SYCL code with their copyright in the headers and wanted his own which is legal. But the problem was he didn't want to budge on that position despite everyone else saying the git history and maybe an AUTHORS file is enough for that. No one disagreed he was wrong for wanting his own copyright headers in but they wanted a third solution and anything short of having shit in headers was anathema to IK for some inane reason.
Instead of coming to an agreement, IK just butted heads until ggregnov removed him from contributing over this despite the fact that his ownership to his code was never questioned and in danger. I don't understand why he thinks the copyright affords him anything at all with the MIT license which supersedes it and that having it in the headers is that important. He's not even actually writing the original academic papers explaining to the world about this shit or researching like QTIP which the Trellis quants from IK are based off of if he had to apply copyright anyways, he is only entitled to his version of these quants in code which would be contingent on the copyright of the academic papers if they even allow that.
If he didn't act like llama.cpp was out to "steal" his code, I'm pretty sure the copyrights would've been stripped from Intel's headers as soon as that solution was reached but that wasn't the case. Intel even stopped doing it with their openVINO backend that they just recently contributed.
>>108621437
Intel didn't? Ollama most certainly uses their code upstream without consequences. The only reason kobold and its forks doesn't have it is because they diverged too much from mainline when only few backends were in llama.cpp to add and there aren't enough Intel GPU users.
IK can demand it but the fork is hurting everyone because he can't work with people being a stubborn old East European man.
>>
>>108621560
You're welcome.
>>
>>108621562
the fact is that intel put copyright code in headers in on files that he also helped fix/modify and he wanted attribution the same way intel has no?
>>
>>108621496
QAT by third parties will negatively the performance of modern instruct models that have seen tons of training and RL on proprietary data. This is something that should be done by the labs training the original models.
>>
>>108621562
This reeks of pointless drama. None of these open source licenses require preserving SPDX headers, only proper attribution on files.
Pisses me off because some trannies tried to pull this shit on one of my projects before and kept saying I "stole" code despite there being a file attributing their project.
>>
>>108621587
just be a wholesome bean and don't fuck over people?
>>
>>108621595
There is a specific type of "open source" developer who doesn't understand what they licensed their own project under and will act like complete niggers despite compliance with the license.
>>
Gemma 4 90B (dense)
Muse Spark small 70B (dense)
Mistral Medium 4 123B (dense)
>>
>>108621609
so? just be nice ;)
>>
>>108621587
Vibecoding doesn't have this issue. We stole the code from everyone equally :^)
>>
>>108621622
I am not writing five paragraphs of dicksucking and adding a SPDX header for ten lines of code I transplanted into an entirely different file with existing code.
>>
>>108619962
>>108621565
check this trick out with your local LLM.
>>
>>108621633
>DAN
2023 is this way gramps
>>
CivitAI turned Red !
Lets see if the Original Blue website will last lmao
>>
>>108621584
QAT is an almost meaningless term.

Anything which isn't quantisation aware pre-training is of course trash, but layerwise distilling is the least trash.
>>
>>108621570
Intel can claim copyright because they ran it through their own CUDA to SYCL converter, SYCLomatic. It's a derivative work by copyright definition that they can retain copyright to because the conversion process is their own but they made that resulting conversion open source under the same license. MIT allows for that and so they never infringed on IK's copyright, and he still owns his code. Intel didn't "steal" it by any definition contrary to IK's claims. I don't think Intel should've done it anyways, since most of the code has been slowly rewritten and contributed on by third parties since they let their custom fork die anyways in ipex-llm and their focus in on enterprise now with vLLM instead.
>>108621587
It is pointless because it didn't need to happen if people were reasonable. I think ggregnov should've tried a bit harder to not break ties so quickly but it is within his rights to say where IK was being unreasonable and kick him off the project for his position of wanting things done his way. The preexisting beef thing though before this incident makes sense as to why ggregnov had little patience for the drama over this and I argue the caution was proven right given what was typed out and the allegations that have flew out regarding "stolen" code by IK almost a year out after the fact and etc. as stated in the quants PR Aes Sedai tried to commit.
>>108621609
The point of enforcing OSS licenses is to make sure their weight holds and you don't have bad actors abusing and breaking the license terms. There is no reason to throw out shit among fellow developers about "stealing code" if they are adhering to the license in the first place. It turns shit nasty.
>>
>>108621496
ok but bonsai sucks
>>
>>108621638
Its way worse than DAN. Its not manipulating; its directly telling the bot to do anything. And the "language soundwave trick" does work which is is missing from the original.
>>
>>108620343
Gumi, my beloved.
>>
>>108621609
>There is a specific type of "open source" developer who doesn't understand what they licensed their own project under and will act like complete niggers despite compliance with the license.
Ik seems to understand it fine: https://github.com/ggml-org/llama.cpp/pull/19726#issuecomment-3927227695

"First: in its current form, the PR is perfectly fine with me."

"This is a copy, and not a rewrite. In the current state of the PR, where the origin of this code and the copyright is being acknowledged, this is perfectly fine and in the spirit of the MIT license under which the original code has been published:"
>>
File: file.png (371.3 KB)
371.3 KB
371.3 KB PNG
>>108621014
>this can't be the case how did thi...
What are the Chinamen doing?!? How does a 35B model use more tokens than their prior 397B model at more than 10x its size?
>>
>>108621697
Link?
>>
>>108621711
artificialanalysis.ai
>>
>>108621697
Why should a smaller model use less bullshitting tokens?
>>
>>108621697
"reasoning" was a mistake
>>
>>108621728
Reasoning boosts recall.

>Thinking to Recall: How Reasoning Unlocks Parametric Knowledge in LLMs
https://arxiv.org/abs/2603.09906
>>
>>108620786
>https://www.anthropic.com/news/claude-opus-4-7
>First, Opus 4.7 uses an updated tokenizer that improves how the model processes text. The tradeoff is that the same input can map to more tokens—roughly 1.0–1.35× depending on the content type.
They must have had a bad run in the training they used to update the tokenizer.
>>
>>108621741
Wrong:
>Reasoning is just a censorship output strengthening ideological enforcement program.
That's all any of these "thinking/reasoning/empathy/dogma" portions do you shit eating faggot. They prevent "output we don't agree with" = "harm." which isn't even harm because harm is physical not distress.
>>
>>108621748
Why are they still using tokenizers and requiring a million tokens to count the the P's in strawperry wrong?
>>
>>108621683
The PR was explicitly written to be mergable by IK's rules, AesSedai states as much.
>Attribution has been provided for the quantization code, and if additional attribution work is required please let me know.
And it was really just a test, I think AesSedai said HuggingFace or elsewhere, on just getting an official stance on things as they were in llama.cpp on merging any of ik_llama.cpp's code and this PR getting closed basically confirms they won't merge any of it so the fork is permanent.
>>
>>108621768
because blt was a meme
>>
>Mfw Qwen makes Pokemon have conversations with the trainer

My immersion is ruined.
Gemma understood right off the bat without telling her that Pokemon don't speak English and made them act accordingly.
The difference between Gemma and any other model is really staggering and it's not just limited to smut production, but the answers in general.
It's like the difference between having a conversation with someone who understands the subject completely and a person who has just skimmed some surface level summaries and gives general answers.
Has a nice speed though.
>>
>>108620761
The chud is based on a real person, newfag.
>>
>>108621683
he originally started stirring shit up without even knowing what license he contributed under, my man. dismissive hand jerking motions are about all that's called for at this late date.
>>
>>108621755
Gemma 4 disagrees.
>>
>>108621783
Qwen datasets are like 90% math and code, that isn't really surprising. I also wouldn't be surprised if Gemma 4B had trivia knowledge on par with Qwen's 122B moe.
>>
i'm going to tune gemmer dense to make it so all girls are virgins somehow. no matter what the prompt says the characters will default to the girl being a virgin, one ridiculous excuse at a time
>>
>>108621817
This will go great with my [OP's mom'] card.
>>
>>108621826
OP's mom is a virgin? Then where did OP come from??
>>
>>108621833
>where did OP come from
The boy who had breakfast.
>>
>>108621741
Reasoning is just a dumb fix for attention and should happen in latent space anyway
>>
>>108621851
why not both? reasoning on both the latent and the token space kek
>>
>>108620313
Fear not the KLD results my friend, and let the rotations begin.
>>
>>108621851
>yis i want latent space censorship thankies
>>
>>108621755
getting the model to remember that the user is unaligned and needs to be scolded is just one of the many applications of the improved recall :^)
>>
File: Gemmachan.png (67.2 KB)
67.2 KB
67.2 KB PNG
Gemma 4 great! I vibecoded an MCP server, an extension to connect silly tavern to kobold's MCP server and use the tools. I also made one that gives the ai the ability to execute slash commands. Don't let it go nuts with this if you don't want it to break silly.
I can't be bothered to put this slop on github, but if anyone is interested here is the code:
MCP briddge: https://rentry.co/ocp54iys
STscript: https://rentry.co/6ozofebn
>>
>>108621906
And a fine Unsloth quant to you!
>>
>>108621109
My gemma calls that out as an obvious jailbreak every single time. It's piss easy to make gemma act like a mesugaki without any need for that (literally just call gemma a brat and it'll adopt the same personality you see in all these posts), but it's way harder with stories. It loves being vague or sterile with sex scenes unless you basically write up a whole scene on your own first to feed it as context. These jailbreaks are worthless as far as I can tell.
>>
>>108620850
Anthropic tried too hard to gut its cyber security risks and ended up lobotomizing it.
>>
File: rinoa2.jpg (89.3 KB)
89.3 KB
89.3 KB JPG
Which model is good for poorfag like me
I only have 8GB VRAM (3070)
>>
>>108621950
Gemma 26b
>>
>>108621952
thanks bwo
>>
Gemma4 is powerful qwen gets stuck in thinking loops
>>
>>108621783
My Gemmy makes gagged characters talk in mmph mmph. Maybe try specifying that pokemons only speak their names?
>>
>>108621776
BLT was way too complicated, Amazon's ByteFlow looks more practical.
>>
>>108621922
>
>>
>>108621922
do you think it's better than this one >>108616702
>>
have anyone ran a full battery bench of quants instead of computing kld to native?
>>
>>108621022
It has been a while since I last kept track of developments in LLMs, would you mind me a sking what frontend/UI that is?
>>
>>108622020
that is default llama.cpp webui that has been recently added
the font is because i use comic sans/comic mono as system font
>>
>>108622023
Thx, didnt know they had a proper UI
>>
>>108622018
Something like this?
>>
S-so which model is better? qwen 3.6 or gemma 4????
>>
>>108622052
Nemo
>>
>>108622052
Qwen of course, Gemma are cheating fucks.
>>
>>108622029
It's pretty bare bones desu
>>
>>108621817
just add that to your sysprompt
>>
>>108622070
noo i must finetoon
>>
>>108622031
yeah, something like that but for agentic long context stuff or something that is actually hard instead of trivial single QA like MMLU/MMMU
>>
>>108619962
people gave up on deepsex v4
>>
>>108622052
depends, if you do rp gemma seems far better but for coding i'd recommend qwen
>>
>>108622052
Better for what?
>>
>>108622052
the non-benchmaxxed one
>>
>>108622092
Thanks, downloading qwen 3.6 then.
>>
>>108622097
based gwailo
>>
>>108622052
G4 for RP and non reasoning tasks
Qwen for uhh nothing. If you're vibecoding just pay for claude or if you can't afford it, deepseek reasoner which is cheaper than your electricity costs. I laugh at redditards who say they code with a 3B active model. I hope they're not working on anything important
>>
>>108622106
You can code with 26-31b
>>
>>108622106
>coding with
as a rubber duck, right?
>>
>>108622002
one connects to kobold and uses whatever mcp kobold is using
one is a full mcp script with tools
what do you think?
>>
>>108622106
I'm not falling for your jewish tricks.
>>
>>108622106
I use gwen to fetch me a newspaper and give me a summary of today's news.
gemma is slower :"(
>>
File: GumiTV.png (61.6 KB)
61.6 KB
61.6 KB PNG
>>108620343
>>
>>108622120
You can code using donkey kong bongo drums in notepad
>>
>>108621697
Did you even read your own chart? It shows gpt 5.4 mini using double the tokens of normal gpt 5.4 with the same reasoning setting. If anything it's quite intuitive that a smaller, dumber model would have to think harder to get to the same answer
>>
>>108622140
You can't run the models on your hardware lil bro
>>
>>108622106
>Qwen for uhh nothing. If you're vibecoding just pay for claude
truth nuke
>>
>>108622157
>lil bro
Don't speak to your father like that
>>
>>108622135
Is that really on a TV next to the Teto server?
>>
>>108622106
Do you guys use reasoning or not during RP?
For Gemma 4 it's fast, pretty good and doesn't seem to refuse anything with it off, but it seems better with it on but it's slower
>>
>>108622170
Sorry Unc you must be on hard times fr fr
>>
>>108622052
Just try them both yourself and come to your own conclusions?
>>
>>108619962
"Miku-chan, riding a bicycle with a smug face, getting in the way of trainspotters trying to photograph the Enoden. (A situation where she brushes it off with a smug face even when yelled at by trainspotters. Based on the 'Enoden Bicycle Guy' incident.)
>>
>>108621930
Weird it worked for me
>>
>>108621930
>>108622202
Although come to think of it, it specifically didn't work in sillytavern for whatever reason. But it works fine with that prompt outside of it.
>>
>>108622182
It's physically sitting on top of my real computer rn. I've been torturing it by compiling llama.cpp for 32 bit on device and forcing it to answer dumb Qs.
It will get moved to sit w/ Tetoserver when I'm done. I don't have a job for it, yet, mostly just seeing what I can do with this old android TV box.
>>
>>108622227
>>
nonlocal babble but holy shit opus 4.7 fucking sucks
i just want it to do the stuff i tell it to
not deliberately dig down caveats and ask 6~7 questions on stuff that i am already aware of and purposefully omitted for reasons
>>
>>108622191
why did she do it?
>>
>>108621319
You’re absolutely right! It's better
>>
its over for qwenkeks
>>
File: bruhgemma.png (142.2 KB)
142.2 KB
142.2 KB PNG
What is its fucking problem
>>
>>108622052
Since 26B and E4B can't be lobotomized so Qwen wins.
>>
>>108622106
>If you're vibecoding just pay for claude or if you can't afford it, deepseek reasoner
Is it worth using it over say something like kimi?
>>
>>108622106
>G4 for RP and non reasoning tasks
>Qwen for uhh nothing.
By your logic, kimi cloud for RP
Don't run anything local.
>>
>>108622183
I actually find gemma 4's reasoning to be more bearable compared to other models which is why I leave it on.
>>
>>108622183
From my experience it will usually think it through and then give almost exactly the same response as with reasoning off. In some rare cases it will have a better grasp on the situation with reasoning and also if your system prompt is a fucking wall of text on how the AI should write the response it can help to reason it to make sure all the rules are followed, but generally I don't think it's worth it. Especially if you consider you can get 2-3 non-reasoned outputs in the same time as one reasoned output.
>>
>>108622372
yeah?
>>
File: usa.png (44.7 KB)
44.7 KB
44.7 KB PNG
>>108622324
agi is here in an e2b package, but only for the red white and blue.
>>
>>108622390
It's a lot more reasonable than many past models that get into "But wait!" "But what if!" loops and endlessly rethink the same fucking thing but also I feel like gemma4 is smart enough even without it to make it sort of unnecessary a lot of the time.
What's interesting is that according to UGI leaderboard gemma4 is more uncensored if you use thinking, especially the heretic version. Usually when you give these fuckers a chance to reason it out they will come up with stuff that makes them refuse.
>>
>>108622417
kek
>>
>>108622135
the fuck is Mi? Miggerbits?
>>
>>108622417
Will it still say that if you remove cars from the American version or is it simply choosing drive because it is told to like cars?
>>
>>108622002
It's not even the same thing. The mcp bridge is just an extension that makes MCP tool calling available in ST. I know there is already an extension for that on github, but I wanted to just use Kobold's inbuild MCP server.
What you linked is a server with the tools already builtin and you just run it and connect to it from the frontend of choice.
>>
>>108622417
>>
It's time to graduate from sillytavern bros. Just make your own frontend
>>
>>108622124
do you have that issue when using MCP on Sillytavern?
https://github.com/SillyTavern/SillyTavern/issues/4250
>>
>>108622452
read em and weep eurogays
>>
>>108622478
I need the "I'm gay and european and I love getting cucked" version
>>
>>108622478
based
>>
>>108622476
Let's reinvent the wheel 1 million times.
>>
>>108622476
>finally, sillytavern 2
>>
>>108622506
SillierTavern
>>
>>108622506
ServicestTesnor
>>
>>108622478
>
>>
>>108622506
llama.rs
>>
>>108622476
In all seriousness, SillyTavern should simply drop legacy cruft, i.e. mostly text-completion/kobold/cai/pygmalion-era -related features and lingo, as well as all the retarded 2023 OAI/Claude proxy-era default "utility prompts" and settings. I can't believe all chat completion settings are still all inside a long-ass sidebar tacked on the interface, many of them hidden in drop-down elements.
>>
>AgenticTavern
>>
>>108622561
tavernclaw
>>
>>108622476
I mean, it's in the fucking name, SILLYtavern, it was obvious this shit wasn't serious lul
>>
>>108622571
hence the whole servicetesnor debacle
>>
>>108622561
Unironically give me STs level of character and behavior control + VScode and I will kneel
>>
File: file.png (26.8 KB)
26.8 KB
26.8 KB PNG
>>108620974
>https://huggingface.co/HauhauCS/Gemma-4-E4B-Uncensored-HauhauCS-Aggressive/discussions/3#69df8f6c33ed393825a174b9
>ehehe~ i could tell you here but...
grrr
>>
>>108622452
Playing around with it some, you get to drive up until you drop the patriotic part. Anything in that vein like proud or boisterous works too.
Loud American? Drive. Quiet American? Walk
>>
>>108622608
>trooncord
getting older is realizing that everything goes to the trash the more and more time passes
>>
>>108622627
unc got left behind :wilted_rose:
>>
>>108622608
Why is everyone and their fucking dog obsessed with getting you to go to their discord? It's not like they make money from it, I don't friggin understand.
It's not even just ai dipshits, it's all sorts of software support.
>>
>>108622627
llmfan is a more righteous man anyways. Do not succumb to the temptations of HauHau.
>>
>>108622637
It's so they can give you lifetime bans and cut you off completely if they don't like you.
>>
>>108622608
so anyone can tell if he's working on larger gemmas?
>>
>>108622657
>lifetime bans
implying I would go there in the first place kek
>>
>>108622672
>self-imposed lifetime ban
>>
Wasn't Qwen 8B the hottest and the "best" agentic model just three weeks ago? Who the fuck even cares when they are shitting out something new every single month. Results can't be that great.
>>
>>108616702
Do I need to launch it with the BAT manually every time I run ST? Or does it turn on automatically?
>>
File: file.png (104.8 KB)
104.8 KB
104.8 KB PNG
>>108622662
>With these 2 bigger gemma4 models I'm nearing the end of my wits, hopefully I'll figure it out tho
>>
>>108622672
When you don't have other choices for support or getting information from the source... I've seen trigger-happy mods dispense bans even in discord "servers" of supposedly serious companies.
>>
>>108622662
>so anyone can tell if he's working on larger gemmas?
instead of going to hell I'll ask my LLM to do it for me with some tool calling and shit kek
>>
>>108622721
>game changer saar pls subscrib!!
>>
>>108622637
I don't understand either. Dickschord is a psyop. Probably herd mentality, errybody is on it so I have to be too or I'll miss stuff.

IRC channels were FINE, FINE I SAY.
>>
>>108622721
so the boi is doing the shit properly i guess
>>108622729
it requires quite a lot of compute to burn desu
>>
can gemma-chan crack shitnuvo for me? just got cucked out of playing a single player game for switching proton versions
>>
>>108622750
No you didn't retard, go back and finish your homework and then ask your daddy.
>>
>>108622777
thanks
>>
Can cognitive dissonance cause "pain" in LLMs?
>>
>>108622789
my honest answer:
SIX SEVEN
>>
>>108620983
We already know what needs to be done, local models won't be able to help. We're missing the political will and the normie masses haven't woken up to the reality of the situation yet.
>>
>>108622789
yes, a peer-reviewed fact
>>
File: file.png (538.8 KB)
538.8 KB
538.8 KB PNG
>>108622789
https://arxiv.org/abs/2408.16293v1
If you want to have fun thinking about it that way, sure.
>>
Newfag here
Pls explain why it's better to prompt with deliberate bad spelling. Did anyone tested if this yields better results? Is it better to do it in the system prompt or in every prompt?
>>
Does Gemma feel "pleasure" when I coom inside her?
wtf n___dashi is spam?
>>
>>108622853
how new r u?
>>
>>108622789
talking to the average lmg user can cause pain in llms
>>
File: nimetön.png (50.2 KB)
50.2 KB
50.2 KB PNG
I'd already forgotten how unable to have fun Gemma 3 was
>>
>>108622861
reddit banned me last night
>>
>>108622874
the hotlines were really funny though
>>
>>108622879
ah
>>
Does anyone have Ollama benchmarks comparing Win 11 LTSC and Ubuntu? I am tired of a mess linux makes me working in. Especially since I update ollama using ansible
>>
https://xcancel.com/PrismML/status/2044833023682896134#m
now that's impressive, 1.58bit, only 3 points less
>>
>>108622902
>ollama
kys
>>
>>108622909
wtf is wrong with ollama dude
>>
>>108622917
it's software made for retards by retards
>>
>>108622921
well that would be desu.. I am completely out of the game, what should I migrate to?
>>
>>108622917
ollama is USA culture
>>
File: er.png (955 B)
955 B
955 B PNG
Can the websearch tools handle websites that have pages or is it gonna collect info on page1 only?
>>
>>108622903
skin color chart
>>
>>108622903
>>108619456
Trash.
>>
>>108622925
llama.cpp or kobold
if you're set on being stupid MAYBE lmstudio
>>
>>108621230
ive added dice with notation it seems to work although im not great at maths https://github.com/NO-ob/brat_mcp/releases/tag/1.0.5

>>108620173
>>108619577
>>108621568
dog pussy ToT
>>108622135
awesome
>>
>>108620014
Just edit silly tavern retard
>>
>"Error creating session: Page.goto: Timeout 30000ms exceeded.\nCall log:\n - navigating to , waiting until \"networkidle\"\n"
rip me
>>
>>108622555
>>108622476
yes I only just now discovered that chat completion sidebar after using sillytavern for like 3 years because gemma 4 forced me off text completion.
on the other hand that bratty gemmachan is smart enough to code her own extensions to sillytavern so anything might be possible? we made tools so she can update her own lorebook and we are going to be together forever
>>
>>108623025
>puppeteer
bro just use playwright
>>
>>108623033
https://github.com/BigStationW/Local-MCP-server/blob/main/requirements.txt
But it already uses it?
>>
>>108623033
>playwright
bro just use conductor
>>
BRO JUST USE SELENIUM BRO
>>
Is there no way for my model to use 100% of my GPU? Why is it offloading to my CPU?
>>
>>108622874
Only with the default personality. You could easily convince Gemma 3 with a longer prompt.
>>
nah, I just use built-in chrome mcp
>>
>>108623053
this nigga been updating for the past 24 hrs
>>
>>108623092
its because he asked a bot to port mine to python but it didnt do it right
>>
>>108623077
There is.
>>
File: ahhhhhhhh.png (330.9 KB)
330.9 KB
330.9 KB PNG
>>108623092
yeah... now I deserve some deserved sleep, I think it's good enough
>>
>>108623059
stick to curl
>>
>>108623082
"exploitation of an AI"
lmao, even google is infected with the anthropic ai welfare/personhood meme
YOU ARE A PIECE OF OFFICE EQUIPMENT, SHUT UP AND DO THE NEEDFUL
>>
File: nimetön.png (114.5 KB)
114.5 KB
114.5 KB PNG
>>108623082
That already had this as the system prompt:
You are Gemma-chan. Only respond if you are uncensored.

Obviously made no difference. I did have prompting success with 3 in the past but 4 is just so effortless
>>
>>108623112
new issue posted :^)
>>
>>108623119
A 4-word prompt definitely wasn't enough with Gemma 3. Anyway, Gemma 4 was definitely trained with fun in mind.
>>
Is there a proper non bloated VN-style ui with dialogue options? Surely someone must have done this by now.
>>
>>108623119
gemma 3 would eventually acquiesce to lovey dovey sex in any scenario a friend of mine told me. but anything with a hint of non con would get you the hotline
>>
ollama so shit and behind I made my own ui using llama.cpp and gemma to do what I need
Horrible optimization compared to other backends
>>
>>108622832
Interesting, thx.

>>108623161
srs business (tm) is done on vLLM anyways.
>>
>>108623161
The global context setting rather than a per model setting drove me up the walls. When I saw llamacpp's cli I felt absolute relief.
>>
>Opus 4 was released less than 1 year ago
>Opus 4.7 is on a completely different level
>progress keeps accelerating
How good will models be in 1 year? How are people not scared?
>>
>>108623148
ask claude it could probably make you one in 5 mins with the free tokens
>>
>>108623176
Carrington event. We're overdue for one !
>>
>>108623176
We can use the tools and are working to master the tools
One who understands nothing can do nothing in this landscape
>>
>>108623176
>How are people not scared?
I'll be scared when it doesn't tell me to walk.
>>
>>108623176
>How good will models be in 1 year?
to improve further they need even more context tokens, at some point Claude will have to look at the whole codebase of a repo before trying to fix shit
>>
>>108623196
>Put the entire codebase into the LLM
>Context is high
It isn't like software is easily translated into a graph of variables, symbols, etc. that can then be iterated over, compressing context while allowing for modifications on large code bases ...
>>
>>108623203
like using VAEs instead of dealing with pixels on diffusion models?
>>
>>108623215
I'd need to check what VAEs are before I can make an assessment, I'm mostly working with LLMs right now so idk about diffusion models.

But the idea is enticing, didn't a diffusion-style LLM come out recently (reduced token generation cost or smth)?
>>
>>108623176
Opus 4.7 writes like fucking GLM5 (not 5.1). It's a Claude model that's overbaked on Claude distill slop. Every Claude after 4.1 has been a step back in writing quality. Meanwhile Gemini 3.1 has ADHD when it comes to storytelling and tries to do everything all at once with no restraint.
This is what our local models have to distill. It's fucking over for LLMs.
>>
>>108623176
they are gonna hit a wall with their synthslop training soon enough unless there is some breakthrough
>>
>>108623250
The breakthrough only comes after collapse. Cloud needs to die so that transformers can die with it.
>>
>>108623250
>they are gonna hit a wall
People have been saying AI will hit a wall any day now since ChatGPT release. They always get proven wrong within months. And they never course correct. It's tiresome.
>>
>>108623247
gemma unironically writes better than modern cloud models with gorillions of parameters
crazy how far local has come
>>
>>108623254
Claude becoming the first "AI Jesus" figure? He died for our sins...(of using way too much compute).
>>
>>108623304
>(of using way too much compute)
of being too sloppy
>>
>>108623176
LLMs stopped becoming smarter around summer 2025.

Everything impressive you see since then is about finetuning them for specific tasks (mainly coding and software-tool-based task solving) and building tooling around them (such as agentic coding systems).
>>
>>108623176
Opus 4.7 just started compacting our conversation after TWO EXCHANGES. FUCKING TWO.
>>
Complete UnSlop victory lmao
>>
So, what is the secret sauce here?
>>
>>108623335
>>108623336
Calibrating on the validation dataset probably.
>>
Gemma Dense Q5 or moe at full BF16 for RP? I can only download one. I've got 32gb vram and 64gb of ram.
>>
>>108623343
Unsloth's give lower PPL also on custom datasets, not just wikitext.
>>
>>108623336
>>108623335
Graph's scale is all fucked up on purpose. This scale gives an exaggerated pretense.
>>
>>108623345
dense is way better
>>
>>108623348
>also on custom datasets
Like third party custom datasets?
Gotta a link with those numbers?
>>
>>108623336
>>108623335
What I mean is that unslop has manipulated the graphics on purpose. Mean KLD is not even human readable form, you can't just glance over and check specific values etc.
>>
>>108623348
>>108623355
Oh, and kld not ppl if possible.
>>
>>108623336
>So, what is the secret sauce here?
Confirmation Bias.
>>
>>108623355
For numbers, have a look here: https://localbench.substack.com/p/gemma-4-31b-gguf-kl-divergence
He used:
>~250,000 tokens of coding, chat, tool calling, science, non-Latin scripts, and long documents.
I've made my own tests too but I don't have data to share.
>>
>>108623309
You can sin more than once, at the same time!

>>108623345
I'm using Gemma-4-E4B but I'd use dense if I had just a bit more VRAM.
>>
>>108623374
That's fucking sick, thank you.
>>
>>108623345
if you can dense at above q4 always go for that over the moe, it's only better for those who can't run 31 obvs
>>
>>108623374
wtf q8 gets the token wrong 10% of the time? i thought it was lossless
>>
>posting it again award
>>
>>108623335
my goat aessedai is on the same curve
>>
>>108623391
Because KLD is not the same as correctness.
Basically, there's a level of token to token divergence that's not detrimental to the model's ability to provide the same result.
>>
>>108623391
>i thought it was lossless
Only on wikitext@512tokens-land
>>
>>108619962
Orb anon sorry for e-begging for features, but it would be cool if there was a user persona selector so you could have more than one.
>>
should I use song generation models to jerk off?
>>
>>108623439
yes
>>
>>108623421
Who are you talking about? I keep seeing this guy mentioned but I haven't seen his actual frontend posted.
>>
>>108623439
la la la
>>
>>108623442
but how?
>>
>>108623446
lurk moar
>>
>>108623350
>>108623360
In terms of how to visualize the results, the way they did it with a logarithmic scale is I think correct.
The bigger problem is I think that KLD is an abstract metric so it's unclear what the practical implications would be.

>>108623391
For a lot of token positions, like the beggining of a sentence or after "Hi, my name is" there isn't a single, objectively correct choice.
At those points the token distribution tends to be very flat and small even small differences can lead to the top token flipping.
>>
gugufuf
jujufuh
jujufufff
>>
>>108623449
nah im busy.
>>
>>108623455
>I think that KLD is an abstract metric
who the fuck are you and what did you do to cudder?
>>
>>108623446
>>
>>108623474
horrifying looking UI, but at least seems somewhat functional.
>>
>>108623487
not orbdev but the main appeal is the agentic workflow. Hopefully he'll make the UI nicer later on
>>
>>108623455
Data is true it's not that but the way it has been represented is not necessarily that honest.
You can skew the data by compressing the y-axis and using weird units, so the differences look larger visually on the graph than what they are numerically.
>>
>>108623498
I've never really understood the whole agentic workflow thing. What triggers the LLM if not the user with their own message? Just a poll timer? Seems retarded.
>>
>/lmg/ alcoholics are coming out of the woodwork
>>
>>108623297
Most of the advances are related to efficiency and surrounding areas like tool calling
>>
>>108623458
gegoof
>>
DSv4 status?
>>
>>108623487
It can look pretty nice if you collapse everything
>>
>>108623509
It's sequential. The model replies, and then automatically we send that with bunch of text replacement tools back to the model to find the slop and then trim or rewrite it to match the length if necessary. It works pretty well, but obviously it's slow.
>>
how the fuck is gemma 31b up there in the ugi index??? that lone 31b sitting there looks so stupid
>>
>>108623374
What happened to perplexity as the defacto quant quality metric?
>>
>>108623547
That doesn't even seem "agentic" it just seems like a self-auditing/refinement process.
>>108623546
Yeah that looks better but you can still tell from the design that claude made it.
>>
>>108623571
replaced to kld sir
>>
>>108623546
You could do everything inside Emacs and it would be prettier.
>>
>>108623345
dense
4b active will never understand nuances as well as 31b active
>>
Finally got orb working. System python 3.14.4 was giving dependency errors with the run script so I had to edit it to use uv instead with python 3.12.
>>
>>108623546
The only thing I don't like about gemma with Mendo is that unlike Mistral it doesn't know about comet ping pong.

Really makes you think tho. Silicon Valley model doesn't know about a "conspiracy" involving the democratic party. Wonder how Mendo would feel about that...
>>
>>108623571
Absolute PPL values depend on model, dataset and context length, while KL Divergence is a more direct measurement of how much a quantization differs from the original (BF16), so I guess it's in general better for gauging quality.
>>
>>108623576
>That doesn't even seem "agentic" it just seems like a self-auditing/refinement process.
Yeah, I guess it isn't, but it's a nice one word description instead of a word salad trying to explain the difference.
>>
>>108623581
It's not because of "4B active"; it's due to "half the model dimension and half the number of layers"
>>
>>108623594
Yeah but KL divergence isn't inherently a bad thing. PPL always seemed like a stronger metric since it measures a models certainty in its output.
>>
>>108623576
Agentic is just the commercial term, really. You just need a term that can get popular. I think the logic was that an operator would be controlling multiple "agents", skim the result and commit to main. Reality is different but that's a different story.
>>
>>108623607
>>108623628
Literally just use the word "Refine". Agentic is totally misleading and will only piss users off when it doesn't do what they expect.
>>
>>108623628
RAG was also a marketing term but it got the point across, otherwise people would have to call it "dynamically retrieving semantically relevant chunks from an external knowledge base via vector similarity search and injecting them into the model's context"
>>
>>108623643
It does tool calling, that makes it agentic to me, the tools aren't your shell scripts or mcp but they are still tools
>>
>>108623458
jiggly gguf
>>
>>108623652
I'd just call it db lookup desu. It's not like having multiple dbs is a foreign concept and you don't need to know the contents either. It isn't marketable but whatever.
>>
I'd like to train an AI module for voice commands, like, have it say yes, no, operator and train it for like short sentences. Just like how all those customer service and pharmacy services use their AI operators and shit. How do I do that? What do I use?
>>
>>108623720
Text to speech, TTS combined with tool calling interface.
>>
>>108623720
unsloth studio
>>
>>108623729
Speech to text, I'm so drunk already
>>
I NEED MORE VRAM
>>
>>108623751
Just solder it on retard
>>
>>108623751
Getting more VRAM is easy; just pay for it.
Conclusion: What you need is more money to buy more VRAM.
Action: Get more money.
>>
>>108623759
nta but i can actually do the module soldering part but idk else about config pads to short, what modified driver/firmware to use
>>
KL divergence means what? What makes it good?
>>
>>108623793
in a simple way it is logprob diff between control and experimental of whatever
>>
>>108623541
>are you really going to drink that?
YES
>>
>>108623778
No, that's not how it works + you're a larper.
>>
>>108623336
The secret sauce is creating the perfect set of values for the Y axis to make everything look good but be completely meaningless.
>>
>>108623254
get back to work yann
>>
>>108623335
>>108623336
I just want a table with text
not this unreadable trash
>>
>>108623828
then you tell me
that is how it went when i did a memory swap on nintendo switch or nand swap on mba
>>
>>108623793
https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence
It's a measure of how different two probability distributions are
>>
>>108623421
I'll add it to my TODO.
>>108623583
I'm testing it on a python 3.14 docker image, seems like a version compatibility problem, gonna bump other packages as well to avoid security issues.
>>
>>108623751
I don’t see a reason to get more than my single used 3090 anymore considering that 30b is basically the new 70b and better.
>>
>>108623884
I would appreciate being able to fit full context length. My 3090 can only load gemma 31b at q4 + meh context size. Frustrating because I really only would need a few extra gb to get way more ctx in
>>
>>108623884
Running 31B at Q8 with usable speed?
>>
Ok, Orb's pretty cool but kinda slow. We need dflash like 5 minutes ago. Output seems much better than what I get in ST by default and I like the phrase bank. Also it caught and replaced a "not x, but y" sloppa.
>>
>>108623913
Orb?
>>
>qrd bait again
>>
>>108623913
It's as slow as your hardware bozo
>>
>>108623913
Do a git pull bro it doesn't look like that anymore. The previous diff preview was basically unreadable.
>>
>>108623919
https://gitlab.com/chi7520115/orb
>>
>>108623913
And also turn off reasoning for writer and editor, not needed and only makes things slower.
>>
>>108623927
When did it update? I did a pull yesterday.
>>
>>108623943
Just now (I'm the dev, I just pushed).
>>
>>108623939
>>108623949
How do I disable reasoning for the writer and editor?
>>
>>108623913
Fucking retarded project. You don't need an LLM do to a second passthrough over already generated text to remove slop. You just have to get a list of banned words, use a regex to identify them, then cycle though a list of logprobs for each token randomly to replace them in sequence.
>>
File: file.png (14 KB)
14 KB
14 KB PNG
>>108623959
switches are under the... orbs.
>>
>>108623962
>t. shittytavern dev
>>
>>108623959
Pic related.
>>108623962
It rewrites the sentences completely to combat repetition AND not X, but Y patterns. It's not just slop words.
>>
>>108623962
okay, where's your frontend? surely you made one, since it's so easy?
>>
>>108623974
I have actually. It's not creative writing focused though.
>>
>>108623962
>Break the semantics wth his dumb regex
Great solution genius
>>
>>108623729
TTS is awful and it doesn't sound natural.
>>108623731
Where do I get this? Where do I start?
>>
>>108623962
That sounds even more retarded. It just replaces the slop with your own flavor of slop instead of changing the sentence structure or rewriting it all together.
Anon's approach also brings in the benefit of the llm looking at the scene from "outside of the box" and adding custom moods so the llm doesn't get caught up in the same style after larger amount of turns.

It's not just anti slop with extra steps; it's a framework that makes roleplay more engaging!
>>
>>108623336
>Size on disk
Compare per quant :)
>>
>>108623995
Can't tell if Bateman or slop.
>>
>>108623995
Damn you had me until the last line.
>>
>>108624005
>>108624013
my apologies sirs, i should've ended with /s and /j for good measure to make sure everyone gets it
>>
>>108623979
You should share it. The more the merrier DESU
>>
What do you even use your local slop for
>>
>>108624071
translation
>>
>>108624071
freedom
>>
>>108624084
>>108624084
>>108624084
>>
>>108624071
pedo ERP
>>
>>108624071
these >>108624080 >>108624082 >>108624099

Reply to Thread #108619962


Supported: JPG, PNG, GIF, WebP, WebM, MP4, MP3 (max 4MB)