Thread #108563476
Discussion and Development of Local Image and Video Models

Previous: >>108558395

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/ostris/ai-toolkit
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/tdrussell/diffusion-pipe

>Z
https://huggingface.co/Tongyi-MAI/Z-Image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>Anima
https://huggingface.co/circlestone-labs/Anima
https://tagexplorer.github.io/

>Qwen
https://huggingface.co/collections/Qwen/qwen-image

>Klein
https://huggingface.co/collections/black-forest-labs/flux2

>LTX-2
https://huggingface.co/Lightricks/LTX-2

>Wan
https://github.com/Wan-Video/Wan2.2

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
https://rentry.org/mvu52t46

>Illustrious
https://rentry.org/comfyui_guide_1girl

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Collage: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/r/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
https://rentry.org/animanon
>>
>mfw Resource news

04/09/2026

>MAR-GRPO: Stabilized GRPO for AR-diffusion Hybrid Image Generation
https://github.com/AMAP-ML/mar-grpo

>HybridScorer: Score, sort, and cut large sets down fast with GPU-accelerated AI review
https://github.com/vangel76/HybridScorer

04/08/2026

>OrthoFuse: Training-free Riemannian Fusion of Orthogonal Style-Concept Adapters for Diffusion Models
https://github.com/ControlGenAI/OrthoFuse

>MIRAGE: Benchmarking and Aligning Multi-Instance Image Editing
https://github.com/ZiqianLiu666/MIRAGE

>Few-Shot Semantic Segmentation Meets SAM3
https://github.com/WongKinYiu/FSS-SAM3

>PoM: A Linear-Time Replacement for Attention with the Polynomial Mixer
https://github.com/davidpicard/pom

>RS Nodes for ComfyUI: Comprehensive custom node pack focused on LTXV audio-video generation, LoRA training and post-processing
https://github.com/richservo/rs-nodes

>FLUX.2 Small Decoder: Distilled VAE decoder for faster decoding and lower VRAM usage
https://huggingface.co/black-forest-labs/FLUX.2-small-decoder

>Nvidia snaps up AI chip packaging capacity as TSMC expands in U.S.
https://www.cnbc.com/2026/04/08/tsmc-nvidia-advanced-packaging-intel.html

04/07/2026

>Anima preview3 released
https://huggingface.co/circlestone-labs/Anima#preview3

>FrameFusion Image Interpolation: Compact image interpolation model for generating in-between frames
https://github.com/BurguerJohn/FrameFusion-Model

>An Inside Look at OpenAI and Anthropic’s Finances Ahead of Their IPOs
https://www.wsj.com/tech/ai/openai-anthropic-ipo-finances-04b3cfb9

>PrismML debuts energy-sipping 1-bit LLM in bid to free AI from the cloud
https://www.theregister.com/2026/04/04/prismml_1bit_llm

>ComfyUI Hires Fix Ultra - All in One
https://github.com/ThetaCursed/ComfyUI-HiresFix-Ultra-AllInOne

>ATSS: Detecting AI-Generated Videos via Anomalous Temporal Self-Similarity
https://github.com/hwang-cs-ime/ATSS
>>
>mfw Research news

04/08/2026

>GenLCA: 3D Diffusion for Full-Body Avatars from In-the-Wild Videos
https://onethousandwu.com/GenLCA-Page

>Grounded Forcing: Bridging Time-Independent Semantics and Proximal Dynamics in Autoregressive Video Synthesis
https://arxiv.org/abs/2604.06939

>Evolution of Video Generative Foundations
https://arxiv.org/abs/2604.06339

>VersaVogue: Visual Expert Orchestration and Preference Alignment for Unified Fashion Synthesis
https://arxiv.org/abs/2604.07210

>Controllable Generative Video Compression
https://arxiv.org/abs/2604.06655

>Not all tokens contribute equally to diffusion learning
https://arxiv.org/abs/2604.07026

>FlowInOne: Unifying Multimodal Generation as Image-in, Image-out Flow Matching
https://arxiv.org/abs/2604.06757

>Holistic Optimal Label Selection for Robust Prompt Learning under Partial Labels
https://arxiv.org/abs/2604.06614

>Towards Robust Content Watermarking Against Removal and Forgery Attacks
https://arxiv.org/abs/2604.06662

>PhyEdit: Towards Real-World Object Manipulation via Physically-Grounded Image Editing
https://arxiv.org/abs/2604.07230

>Noise Constrained Diffusion (NC-Diffusion) Framework for High Fidelity Image Compression
https://arxiv.org/abs/2604.06568

>RefineAnything: Multimodal Region-Specific Refinement for Perfect Local Details
https://limuloo.github.io/RefineAnything

>Visual prompting reimagined: The power of the Activation Prompts
https://arxiv.org/abs/2604.06440

>MoRight: Motion Control Done Right
https://research.nvidia.com/labs/sil/projects/moright

>Fast-dVLM: Efficient Block-Diffusion VLM via Direct Conversion from Autoregressive VLM
https://arxiv.org/abs/2604.06832

>DesigNet: Learning to Draw Vector Graphics as Designers Do
https://arxiv.org/abs/2604.06494

>FP4 Explore, BF16 Train: Diffusion Reinforcement Learning via Efficient Rollout Scaling
https://arxiv.org/abs/2604.06916

>When to Call an Apple Red: Humans Follow Introspective Rules, VLMs Don't
https://arxiv.org/abs/2604.06422
>>
>>108562783
>even a ComfyUi employee fell for it
lol, lmao even
>>
>>108563499
i don't get it
>>
File: file.png (495.1 KB)
>>108563499
now I understand why it's called HappyHorse, they all have horse faces kek
>>
>>108563499
>literal who
>>
>>108563514
So that's the power of API?
>>
>>108563505
>new model under the pseudonym 'happyhorse' gets teased on arenas, beats top API model seedance 2
>news spreads of this new model, people wondering if it's the new google VEO, others speculating it's china because of the name
>jeet vibe-codes a generic pop-up throwaway website for 'happy-horse ai' claiming it is SOTA, 15b parameters, and will be locally released
>the exact same grift that happened with the 'mogao' model that turned out to be seedream api
>localbrowns itt fall for it >>108555676
>it spreads, chinese anime man falls for it (picrel)
>gets reposted to reddit, redditoids fall for it
>kijai shuts him down and calls it fake, which it obviously is
>doesn't matter, news spreads and now everyone thinks a SOTA video model will be released locally within 48 hours
the backlash will be funny when it releases as API-only and the comments get flooded with outrage, even though whatever company behind it never even claimed it would be local. though it's still deserved, as every model should be local (even if it's fun laughing at localkeks)
>>
File: this.png (256.8 KB)
>>108563557
>the backlash will be funny when it releases as API-only and the comments get flooded with outrage
based, at least it had the effect of creating some hate for an API model, can't wait to see the comments
>>
>>108563567
it's kind of sad to see so many people get hyped for nothing, when the reality is we won't receive a model like this for at least another full year
>>
>>108563573
>we won't receive a model like this for at least another full year
bold to assume we'll get something better ever, it's obvious now that Alibaba has abandoned us, do you even realize that we've been waiting for Z-image edit for more than 4 months now? Pretty fair to say this is all over, chinese culture won
>>
whats with all these xitter screenshots and old memes
>>
>>108563591
none of my images made it in the fagollage too. sad! :(
>>
>>108563573
>it's an open model so the benchmarks are magically true!!
kek
>>
>>108563591
we have a new model we have to fud
it isn't released yet so we're trying to get ahead of it
>>
>>108563594
how does that relate to what was posted
>>
>>108563602
thread being cringe :)
>>
>>108563601
>we have a new model we have to fud
I'm ok with fudding closed source models desu
>>
>>108563605
you seem upset
>>
>muh fagollage
lmao
>>
>>108563557
the model looks like complete slop, i don't think anyone will care one way or the other.
i guess it might be ok if it is an API model and it can hold character likeness properly, bytedance kind of killed seedance 2.0 over that shit.
>>
>>108563617
>i don't think anyone will care one way or the other.
if it was local it would've been hyped though, it's way better than LTX 2.3 and Wan 2.2
>>
>dont care about sneederboards
>never tricked into getting hyped for nothing
Join me anon
>>
>>108563557
Source that it's fake?
>>
>>108563640
KJGod said it
>>
File: FAKE.png (98.8 KB)
>>108563640
>>108563651
https://github.com/brooks376/Happy-Horse-1.0/issues
kek
>>
>>108563655
https://github.com/brooks376/Happy-Horse-1.0/issues/3#issue-4225521889
>The author is using a deceptive title and README to exploit the open-source community's trust.
>The open-source community is a place for developers to share and collaborate, not a dumping ground for your vanity metrics or clickbait schemes.
>>
ACEStep 1.5 XL Turbo. I am speechless, these are all first shots.

J-Core/Ballad
https://vocaroo.com/12JLSQwAuKIH

Electronic, Hatsune Miku included in prompt-
https://vocaroo.com/1l5RndzPCRbL

Country-
https://vocaroo.com/104rQ4A0Ux62

Gabber-
https://vocaroo.com/1b9C9ss8CTh9

Prompts are all enhanced with Gemini's help. Lyric alignment tends to be perfect now. We're in Udio/Suno v5 territory now
>>
File: __00037_.png (1.6 MB)
>>
>>108563674
>The open-source community is a place for developers to share and collaborate, not a dumping ground for your vanity metrics or clickbait schemes.
lmaaaao
tell that to cumfart please
>>
>>108563679
>Electronic, Hatsune Miku included in prompt-
it doesn't sound like miku at all, and the sound is still metallic as fuck, why did you use the turbo model though? the sft is supposed to have better quality right?
>>
>>108563655
fingers crossed it's just an api model.
an open model on par with seedance would be the death of /ldg/
>>
>Alibaba
>Trusting that this lab will release anything SOTA after they withheld Qwen Image 2 despite publishing its parameter count, and top researchers departed the team shortly after the CEO claimed that he is not happy about the state of open source

Unless BFL steps in and gives us a good video model, they have no reason to give us one themselves. There's no competition and no reason to release.
>>
>>108563693
>an open model on par with seedance would be the death of /ldg/
nothing is close to seedance, I've seen some videos from HappyHorse they are mid as fuck
https://youtu.be/mmk9C6bkV_c?t=161
>>
>>108563640
>saaars that is fake???????
this is the website. if you believe this is legit, you're brown: https://happyhorse-ai.com
the github repo with the 'source code' is full of made up bullshit as well. the most obvious tell is that they call it 'happy horse' when artificialanalysis lets companies use code-names for models (mogao = seedream, blueberry/strawberry were flux-2). they admit directly it's a pseudonym:
>We've added a new pseudonymous video model to our Text to Video and Image to Video Arenas. 'HappyHorse-1.0'
this is so that if the model turns out to be dogshit, the company behind it can just silently pull it without the whole world knowing that openai/grok's latest api model is a complete flop. saars are using this to their advantage to make fake websites surrounding these codenames to get people to enter their login information or pay crypto for 'credits' to use it
>>
>>108563702
seedance today isn't the "hollywood killer" that it was when they first showed it off, they bricked the model with their face detection slop.
>>
>>108563688
Definitely sounds like her, at least in style; the faded sounds in that particular song were prompted for (glitchy synths were included in many parts, so it glitched out her voice). That is roughly what you'd get out of a cloud model like Suno after prompting for her, and I'd imagine it gets better with LoRAs.

>why did you use the turbo model though? the sft is supposed to have better quality right?

I always test Turbo first especially since on S it had better creativity, I'll try SFT/Base later. So far what this model has outputted puts it way above S SFT though.
>>
>>108563729
obviously, hollywood threatened them with lawsuits, the """based""" chinks bent the knee to the jews
>>
>>108563739
>this model has outputted puts it way above S SFT though.
>I'll try SFT/Base later.
how do you know that if you haven't tested SFT yet? what?
>>
>>108563750
local diffusion?
>>
>>
>>108563476
Is there anything that rivals nano-banana or similar in terms of being able to just describe what you want to see, rather than a whole "prompt engineer" slopfest? I want to connect SillyTavern to ComfyUI and gemma 4 likes to generate intricate descriptions of settings
>>
File: LMAOOOOO.gif (2.8 MB)
>>108563769
>Is there anything that rivals nano-banana or similar in terms of being able to just describe what you want to see, rather than a whole "prompt engineer" slopfest?
IS THIS NIGGA SERIOUS??
>>
>>108563769
no, not anything close really
>>108563779
yeah it's kinda grim
>>
>>108563779
>use the word banana
>a nigger chimps out at me
I don't know what I was expecting
>>
>>108563769
It is common to use an LLM between yourself and an image model, yes.
>>
>>108563769
Nano Banana isn't even the best image model anymore, I think GPT-Image 2 dethroned it
https://xcancel.com/WolfRiccardo/status/2041573176681918728
>>
local diffusion?
>>
>>108563743
I mean regular SFT, not the new XL.
>>
so close
https://huggingface.co/SG161222/SPARK.Chroma_v1
>>
>>108563798
do you get paid to shill chatgpt here? you've been spamming this slop for days.
>>
>>108563798
oh cool. How do I run the publicly released weights of GPT-Image 2? Is a p40 good enough or do I need two?
>>
>>108563807
The v1 was done two weeks ago. Can anyone make a q8 goof?
>>
>>108563769
Qwen Image 2, but it's closed and has only been rumored to open (doubt it). Joy AI Image Edit seems like a viable alternative to Qwen; it is in fact better at prompt following than everything else we have, but Comfy hasn't added support for it yet.
>>
>>108563809
>>108563814
look at those localkeks seething and jealous that we get quality toys while all you can have is cucked plastic slop keek
>>
>>108563814
ComfyUI has Partner Nodes you can check out, but it's not added there yet. Stay tuned, just check the OP
>>
>>108563818
>only has been rumored to open (doubt it)
that chink insider used to be reliable, now he can't stop missing lol >>108563557
>>
Very organic
>>
>>108563823
For some reason I'm willing to pay thousands to set up a 6x 3090 rig with 512 GB of DDR4 (before it went nuts) but it feels like a waste of money to give ((cloud providers)) $0.01 per image.
>>
>>108563831
the thing is that being rich is useless on local diffusion models, we don't have a giant local model that would rival the best API models, it doesn't exist
>>
>>108563819
>quality toys
larping as a white person driving a car is peak entertainment to an indian.
>>
>>108563849
I'm not the one who made that image, I'm showing you the realism and you focus on something completely unrelated like the color of the skin, reminds me of something...
>>
>>108563847
I have hope. It wasn't too long ago I was thirsting over Deep Dream and imagining how cool it'd be to run that. If local is only as good as SOTA was 6 months ago (more like a year for image gen right now) it's worth it just for the freedom and FOSS spirit.
>>
how's the fudding proceeding friends? did we manage to gaslight anyone?
>>
feels like artist tags go to shit when you try to introduce sex tags on anima
>>
>>108563911
A1111 is so based, he should come back and put ComfyAPI to his place
>>
>>108563915
this is true in my experience as well. every tag is a style tag really. the more you add, the further it drifts
>>
>>108563873
the "moon" being a gen of a still frame from a blurry 720p bodycam, not even a video because openai doesn't do that anymore.
>>
>>108563896
they do it for free even
>>
>>108563923
sdwebui had its time (in the corpo I currently work for I actually coded a UI similar to it for internal usage for promo material creation), but I elected to use comfy as the backend because it's just too flexible to work with
>>
>>108564033
this so much. comfyui made by comfyorg is such a well-coded, convenient and flexible user interface, nothing comes even close. i'd pay for it if I could, right fellow customers?
>>
File: kek.png (372.4 KB)
>>108564039
>i'd pay for it if I could
are you that broke ani? I thought you had secured some investments in japan? maybe you should come back to comfy and ask him for the 1 million grant again, you're talking about him in such a nice way on /ldg/ I'm sure he'll consider giving some of those bits to such a good non-treasonous friend
>>
>>108564039
>frontend
I specifically mentioned I use its backend, fucking retard.
Noodles scare the normal employees since they have cattle level intelligence.
>>
>>108564080
It's cool that there are people in this world for whom lines are too hard to understand. I love living in a world where that is true
>>
>>108564080
>>108564129
>move your mouse with your feet, now!
>what do you mean you don't want to deal with that because there's more elegant ways to manipulate a mouse, like hands for example...
>just say that you're too retarded to do it
ComfyAPI shills in a nutshell
>>
Hey /ldg/,

I’ve been working on Spellcaster, an open-source plugin that seamlessly integrates 30+ AI tools directly into GIMP and Darktable. It uses ComfyUI as the backend engine, running entirely locally on your own GPU. It is essentially the GIMP version of what you guys are doing/using.

My goal was to bring Photoshop-level AI features to open-source editors, without the steep learning curve, cloud requirements, or subscription fees.

What it does right inside your editor:

Inpaint & Outpaint: Generative fill to change objects or extend your canvas.

Enhance & Fix: 1-click AI upscaling, background removal, face restoration, and object erasure.

Relighting & Video: Change lighting direction on portraits (IC-Light) or turn still layers into short video clips (Wan 2.2).

One-Click Install: The installer handles all the backend complexity (detects GPU, downloads models, sets up ComfyUI, and links to your editor).

https://github.com/laboratoiresonore/spellcaster/blob/main/README.md

I am looking for collaborators / feedback. There are a couple of advanced features that I'd like to implement next:
-full LTX2 support
-parsing any workflow method (at the moment, the script is designed with noobs in mind but comfyui veterans will likely want to use their own workflows and special sauces)
-making the "studio" system as advanced as possible
-clever refactoring and reorganizing the script
-better theming

The script is pretty recent but it's good enough that it is all I personally use, instead of comfy
>>
>>108562337
> >Nobody here talks about ACEStep 1.5 XL which just dropped

> https://ace-step.github.io/ace-step-v1.5.github.io/#XLDemos

> It's a different class of model bros, I'm not hearing any slop...
lol remember when locals have claiming that acestep is at suno/udio level of quality?
what level is now then?
>>
>>108564129
if you give the corpo drone a noodle gui... he's not going to understand, that's just how it is. You'd expect people here to be more tech savvy but even subhuman retards found their way here instead of staying in plebbit sooo whatever.
>>
>>108564146
How does it differ from Krita? If you keep the deeper AI controls like loras and sampler settings on the surface and not buried in menus like krita, it may be worth considering.
>>
>>108564154
>remember when locals have claiming that acestep is at suno/udio level of quality?
ESL gibberish crying about local. it would be funny if it wasn't so goddamn constant.
>>
listen up bro, if you don't like comfyorg product you are barely human and don't really deserve to live, alright? comfyorg will shit in our mouths because that's what they like and you must enjoy it goyim
>>
How can I get videos with lipsync? I've seen some around but I don't know which model can generate them. Is it online services only? Which one?
>>
>>108564146
Looks cool, merci. Any demonstration video?
>>
>>108564247
add lipsync to an existing video? generate videos with audio and lipsync?
>>
>>
>>108563679
For the anon who mentioned instrumentals last thread, here's Spaghetti Western
https://vocaroo.com/1i8vvdmjVnDD

This is a genre the previous version could not do at all, and even its base model struggled with, that's a one shot from Turbo. Only issue I've noticed so far is it speaks some stuff in parentheses out loud in the midst of instrumental, which is not hard to fix in post processing and perhaps SFT does better here.

>>108564154
Well, with LoRAs the previous version absolutely was. However, LoRA training to improve everything is tedious. This one doesn't need LoRAs and has much better musical knowledge out of the box, which is now for the first time competitive with Udio/Suno at just 4B.
>>
>>108564260
1) add lipsync voice generation to an existing video.
2) Generate videos with audio or lipsync (if 1 is not possible)
I'm curious about both options; from how often they show up in the usual tiktok feed they seem pretty fast to make. Generating with sound is a new thing for me.
>>
we need more comfyui product discussion itt. come on bitch, more engagement
>>
test
>>
>>108564262
>>108563755
Ummmmmmmmmm... prompt?
>>
Grok is much better with lyrics than everything else kek
https://vocaroo.com/1gBbejpLw6s9
>>
>tdrussell
>diffusion pipe: initial commit 2 years ago
But that didn't trigger FUD. It was Anima that put a target on him. If we follow the money, who could feel threatened by Anima but not diffusion-pipe? I think the answer is NovelAI. They're funding the troll farm.
>>
>>108564493
Nah *someone* is just very mad about comfies decision
>>
>>108564310
>Only issue I've noticed so far is it speaks some stuff in parentheses out loud in the midst of instrumental

I see what the issue is kek, it should all be in brackets instead, parentheses are only for whispers and background noises.
>>
>>108564598
that someone knows what comfy truly wants he's known him for a long time after all!
>>
the small gemma 4 models are so ass on vision tasks, it's a shame they went for a smaller mmproj relative to the 26 and 31b models
>>
ayo
>>108564685
yeah theyre cooked.
even the MOE is shit btw
I went back to qwen3vl
>>
>>108564685
Use gemini 3.1 API instead of kekked localslop
>>
>>108563769
Yeah, Flux Klein.
>>
>>108564727
Klein is not NBP tier with prompts or text because it's not autoregressive. It's very impressive for what it is, and it probably doesn't get any better than what Klein does with prompts for its particular architecture, but it's still not quite there yet.
>>
https://civitai.com/articles/28368/chenkinnoob-xl-v05-is-coming-soon

We are thrilled to announce that ChenkinNoob-XL-V0.5, the direct successor to V0.2, has completed its training phase and will be officially released on April 10th (Beijing Time)!

After months of architectural refactoring and dataset expansion, V0.5 is no longer just a "gacha toy." We have pushed it to industrial-grade productivity standards.

What to Expect in V0.5:

Massive Dataset Leap: Built directly upon V0.2, we have added 2.17 million high-quality, open-source game-related images. The total training dataset now reaches ~12 million images, effortlessly capturing the latest anime art styles and popular characters.

Pro-Level Aesthetics: Built with industrial-grade standards, V0.5 fundamentally eliminates the cheap "AI-generated look," ensuring top-tier composition, lighting, and native anime aesthetics.

A Mysterious Ecosystem Addition: Alongside the V0.5 base model, we will also be releasing a highly capable new model within the ckn ecosystem. What exactly is it? We'll leave that as a surprise for you to guess until release day!

The wait is almost over. Get ready for the next evolution of anime AI generation.

Stay tuned for April 10th!
>>
>>108564801
>sdxl
I sleep
>>
>>108564801
Stop tuning sdxl jesus christ
>>
>>108564801
>SDXL still gets updates in 2026

Just why?
>>
>>108564801
tranimakeks seething. SDXL won.
>>
File: REEEEE.png (52.4 KB)
>>108564801
>SDXL
WHY?? We now have Z-image base and Klein 4b, what is wrong with youuu??
>>
File: mogged.jpg (2.3 MB)
>>108564685
>>108564690
kek
>>
>>108564829
You unironically don’t understand chinese culture. They don’t want new models, they just want more SDXL slop because its fast and thats where the millions of character loras are.
>>
>>108564862
moral of the story, stick with gemini if you want to caption images lol
>>
>>108564801
both of the god damn promotional images have 6 fingers on one of the hands
>>
what's the lora training sample aesthetic called?
>>
>>108564690
It's not local
>>
>>108564989
>he knows local is too ass to generate such a good image
sad :(
>>
>>108563681
kek'd
>>
>>108564989
>its not local
>>
>>108565108
based, bodied that freak
>>
>>108565108
Oh shit the rtx node now does latent too?
>>
>>108565134
nope, just a subgraph
>>
can you really run gemma4:31b on a 4090?
>>
Is this real?
https://happyhorse.app/
>>
>>108563476
Kik Epp23g
Tele Bgftg33

Can anyone help me get a better result training a gf Lora?
>>
>>108565177
no, they're all fakes, HappyHorse is a codename anon, it won't be the real name after the reveal
>>
>>
qrd on anima v3?
>>
>>
>>108563514
>4changs
>>
File: o_00053_.png (1.4 MB)
>>
>>108565373
Quite good i enjoy it
>>
>>108565394
but is it cute and funny yet?
>>
>>108565148
q4 is nearly indistinguishable from q8 and runs even on a 3090
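Back-of-the-envelope math for why the Q4 fits where a Q8 wouldn't — a minimal sketch assuming rough, unofficial figures of ~4.5 effective bits/weight for a Q4_K_M-style quant and ~8.5 for Q8_0, ignoring KV cache and activations:

```python
def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate VRAM needed for the model weights alone (GiB)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

# a 31B dense model (illustrative numbers, not official file sizes)
q4 = weight_vram_gb(31, 4.5)  # ~16 GiB: fits a 24 GiB 3090 with room for context
q8 = weight_vram_gb(31, 8.5)  # ~31 GiB: spills out of a single 24 GiB card
print(f"Q4 ~ {q4:.1f} GiB, Q8 ~ {q8:.1f} GiB")
```

Whether the quality loss matters is a separate argument, but the memory math is why the Q4 runs on a 3090 at all.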
>>
>I decide to take a look at what StabilityAI is doing in the year of our lord 2026
https://xcancel.com/StabilityAI/status/2021322296707908034#m
>safety, safety safety
I see that you never changed, after all those years
>>
>>108565426
>q4 is nearly indistinguishable from q8

Holy cope.
>>
File: lmao.jpg (2 MB)
>>108565426
>q4 is nearly indistinguishable from q8
https://www.youtube.com/watch?v=H47ow4_Cmk0
>>
>>108565497
>smugly posts an image model quant example when a text model quant is being discussed
i wish i was as cringeproof as you are, nonny
>>
>>108565497
>comparing a 30b dense to some flux-sized image model
baka?
>>
>>108565503
try to use a Q4 text encoder and see how it goes keek
>>
>>108565526
Bigger models get hurt less with quanting. Q4 30B is absolutely perfectly fine.
>>
>>108565545
it also depends on how much training the model got, if it's undertained, quants won't hurt much, but if it's a bit overtrained like gemma, every weight count so...
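The "every weight counts" intuition is easy to poke at with a toy round-trip — a sketch using naive symmetric per-tensor quantization (real Q4/Q8 schemes use per-block scales and do much better, so this only illustrates how error grows as bits shrink):

```python
import numpy as np

def quant_roundtrip_rmse(x: np.ndarray, bits: int) -> float:
    """Quantize to signed ints with a single per-tensor scale, dequantize, return RMSE."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return float(np.sqrt(np.mean((q * scale - x) ** 2)))

rng = np.random.default_rng(0)
w = rng.standard_normal(100_000).astype(np.float32)
print(quant_roundtrip_rmse(w, 8))  # small reconstruction error
print(quant_roundtrip_rmse(w, 4))  # step size ~18x coarser, error roughly an order of magnitude larger
```

A bigger model has more redundant capacity to absorb that per-weight noise, which is the usual hand-wave for why Q4 hurts a 30B less than a tightly-trained small model.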
>>
>>108565545
please, your dosage of copium is dangerous
>>
q1 is possible, bonsai showed us so shut the fuck up about your precious '''lossless''' q8
lol
>>
>>108565710
retard vramlet, you never quanted a model in your life, shut your fucking cakehole about things you don't know about.
>>
>tfw reze lost
>>
I don't browse these threads much but I have a question on hardware.

I currently have an ubuntu system I just run for shit projects. can I just slap my 3090 in it and have it work right away with local LLMs? I've heard nvidia drivers are a pain on linux.

It's currently in my main windows PC, but I can't multitask whenever I have a model loaded on it.
>>
>>108565229
well...we still call it nano banana though it was a codename
>>
>>108565806
>I just slap my 3090
I mean if it isn't a complete potato like 8g ddr3 system, yes.
>local LLMs
Not the llm thread but sure.
>I've heard nvidia drivers are a pain on linux?
They werk fine for the most part, it's just bunch of cultists seething.
Plus nvidia is a lot better for AI irrespective of your OS.
>>
>>108565826
>Not the llm thread but sure.
That was pretty fucking stupid of me but yeah I could use it for local diffusion too.

It's an old system with a Ryzen 2600 but it is ddr4 at least.
>>
>>108564801
What's the fucking point of waiting a day to release / announcing a day before?

>>108565785
You in the nice-girl thread too?
>>
>>108565873
of course
>>
>>108565912
Brofist, I posted that pic. Once in a while I still feel the magic of the internet.
>>
>>108565912
looks like some SDXL tune given the eyes and fingers. is it?
>>
File: o_00059_.png (1.2 MB)
>>
>>108566124
>>108565108
>>
File: o_00060_.png (1.8 MB)
>>
>>108565108
What's in optimize node anon?
>>
>>108563679
This time around, I do not see advantages to using SFT. The gap between it and Turbo is much smaller than on the previous version, and I do not notice much difference in sound quality; plus I think Turbo being more creative still stands.

This seems to be the general consensus on Discord as well, everyone is using Turbo, though I'm admittedly a bit worse at prompting SFT and tuning its settings so maybe it's just bias (this is default settings on qinglong UI with steps changed to 50).

Here are some samples from XL SFT. These are prompted with Gemini's help, and a tiny change can make something sound 20x better, so take results with a grain of salt

Keygen music- https://vocaroo.com/11B2ndXidclH
Very sensitive on that one and made everything really fast paced, will prob. need LoRA for that, but chiptunes sound more authentic

Denpa/hyperpop with romaji lyrics
https://vocaroo.com/19gaTvuK3VQg

Eurobeat
https://vocaroo.com/1mF8vI2ppaK6
>>
>>108566261
how good is it at actually singing all the lyrics in the prompt? the previous version of acestep pretty much never got them all
>>
>>108564690
These images look so great on mobile, then you look at them on a monitor and you can see all their flaws
>>
>>108565497
isnt this image like 3 years old by now? image/videos gguf models got deprecated a long time ago
>>
>>108566325
>how good is it at actually singing all the lyrics in the prompt?

XL Turbo gets it right almost every seed, same with SFT. Not perfect, but both have extremely high pass rates, so much it's not a concern anymore.
>>
>>108566351
>image/videos gguf models got deprecated a long time ago
??? What were they replaced by???
>>
>>108566352
Though, as before, what you put in as the duration matters. A short duration or slow bpm with overly long lyrics can speed up the vocals or increase errors, but it's generally much more forgiving and seems to adapt really well even if you mess up the duration.
>>
>>108566356
fp8 scaled and mixed-precision models. the whole purpose of gguf was to save vram, not enhance quality; since vram management and speed have improved by a ton, there's no incentive to use gguf models anymore. they're slower to load and clunkier to run, especially on video models
>>
File: o_00062_.png (1.6 MB)
>>
>vram management and speed has improved by a ton
>>
this is two days old but I just saw it

>Over 1,000 Exposed ComfyUI Instances Targeted in Cryptomining Botnet Campaign
https://thehackernews.com/2026/04/over-1000-exposed-comfyui-instances.html
>>
>She is being embraced from behind by a large, muscular man in plate armor with his head mare
Was supposed to write bare.
>>
>>108566455
>internet-exposed instances running ComfyUI
couldn't be me
>>
>>108566455
shodan moment
>>
Anima 3 is incredible at following artist styles.
>>
>>108566455
api wins again
>>
>>108566455
Fuck off fag
>>
>local getting SOTA video and anime image model tomorrow
I know it's early but fuck it. Local Victory Celebration thread. Can't wait to create masterpieces with Chenkin and then truly bring them to life.
>>
Who's even making the rumored video model? Alibaba? Is that WAN team?
>>
>>108566619
Dumb whores don't even know how to eat a carrot properly
>>
>>108566703
someone working with kling supposedly
>>
>>108566689
>>108566703
we're getting another video model besides LTX and Wan? i thought that was exposed as a fake a few generals ago due to the github looking sketchy.
>>
>>108564154
>lol remember when locals have claiming that acestep is at suno/udio level of quality?
i dont know why you pretend like that was more than a single poster
>>
Open weights 15b miracle model that beats API thingy was Chinese guy getting baited by some fake website.
We ain't getting jackshit.
>>
>>108566723
Is there any actual source on anything? the chink nigga doesn't count.
>>
>>108566775
a tweet from another chink nigga that said its from the alibaba taotian group, with no source of course
>>
>>108566351
>image/videos gguf models got deprecated a long time ago
I don't know which universe you're living in, but it's definitely not this one
>>
>>108566775
the model is real, we know that much. everything else is hype, guerilla marketing and jeet lies.
alibaba probably want a do over after wan 2.7
>>
sometimes I run models in fp8 in solidarity with the proles but otherwise it's full precision ONLY
>>
>>108566757
nothing ever happens, we're stuck with Z-image turbo and Wan 2.2 as the best models until the end of time
>>
>>108566792
>with no source of course
there used to be a chink site (which he based shit on) but it got removed lol
>>
its already friday in china
release the model already
>>
>>108566873
It's Chinese New Year 2
>>
>>108566741
Thinking it's not at that level now is just delusional though. Maybe Udio has catchier songs out of the box because they did insane RLHF, but an ACE Step LoRA or a good prompt/seed obviously surpasses that. One thing I've already noticed consistently from testing XL is that lyric alignment surpasses the Udio 1.0/1.5 models.
>>
>>
https://huggingface.co/happyhorseai/happyhorse-ai-video-generator
lmao
>>
Heads up, looks like acestep.cpp might be the meta for running XL for VRAMlets. Haven't tried it, but others are having success.
>>
>>108566971
cpp is a lot better than all the python shit yes
>>
File: o_00068_.png (1.3 MB)
>>
I'm attempting to generate multiple different prompts in one run, and when all of the interacting characters are in the same prompt, there's a ton of bleeding.

How can I set this up so that it works like the BREAK prompt in forge, but in comfy?
>>
>>108567199
regional conditioning
>>
File: 4687474.png (465 KB)
Holy based, bluvoll made two anima finetunes in one day, 200% more than any of the anima defender shills here.
So you can see, tdrusell, /ldg/ is a sham, stop visiting it and check out the anime generals instead, it’s basically an empty shilling general here, no value, no real users of Anima.
>>
>lora merges
>>
>>108567250
>it’s basically an empty shilling general here, no value
for a "no value" general you sure love to lurk here, really makes you think
>>
>at least he does something,he tries, you don’t, you pretend to be interested in anime in Anima, you simulate caring to generate fake bumps here, in your hollow void of a general
>>
minor fizzle-out
>>
>>108564233
keep seething
>>
>>108567281
Until Tdrusell posts in real anime generals, because Anima was made for anime, I’ll stay here exposing how fake you all are, how you don’t care about anime aesthetics or Anima, and only use it as a pretext to brag that important people show up here.
Last message for today, I have things to do.
>>
>>108567318
Sounds like you're mentally ill
And trying to police in which generals anon posts makes you a lolcow
>>
>last message for five minutes
>>
my god why cant ran just discuss local diffusion instead of pivoting every discussion towards dumb drama
i'm fucking exhausted from this behavior
>>
how does it feel to be the social group whose interests align most closely with furries?
>>
>>108567336
you're a pedophile fuck off
>>
>>108567318
Seems like /ldg/ makes you quite butthurt anon
My advice for you: visit the nearest forest and just scream for an hour. Let it all out. You will feel better!
>>
>>108567336
cosmic fluke, personally I don't see any connection between anthropomorphic animals and absorbent underwear
>>
>>108567001
ani please
>>
>>108567350
"ani" is in your walls
>>
>>108567350
i'm not ani but it's true the memory management in python is really terrible
>>
>>108567364
this
>>
>>108567364
based anon telling nocoders the truth
>>
>>108567221
Tried alternatives to it, but none work. In order for my multiple prompt workflow to work, it needs to be in a single prompt on a single line; a BREAK in that prompt would save it.
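fwiw the closest thing comfy ships to forge's BREAK is the Conditioning (Concat) node: encode each chunk with its own CLIP Text Encode, then concatenate. conceptually it looks like this (the `fake_encode` stand-in below is hypothetical, not ComfyUI's actual encoder — it just shows the shape of the trick):

```python
import numpy as np

def fake_encode(text, dim=8):
    """Hypothetical stand-in for a CLIP text encoder: one vector per token."""
    rng = np.random.default_rng(len(text))
    return rng.standard_normal((len(text.split()), dim))

def encode_with_break(prompt):
    # what BREAK does in forge, and what Conditioning (Concat) does in
    # comfy: each chunk is encoded on its own, so tags in one chunk
    # can't bleed into the other's attention window, then the results
    # are concatenated along the token axis
    chunks = [c.strip() for c in prompt.split("BREAK")]
    return np.concatenate([fake_encode(c) for c in chunks], axis=0)

cond = encode_with_break("1girl, red hair BREAK 1boy, plate armor")
print(cond.shape)  # (6, 8): 3 tokens + 3 tokens, encoded independently
```

so your single-line prompt stays a single line; the split happens at encode time, not in the workflow layout.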
>>
>>108567364
the fact that so much of this is built on python is astonishing, to be fair it makes sense in a lot of cases, but still. python..
>>
>>108567389
then controlnet maybe?
>>
Here we go again
>>
>>108567364
true, but no one uses python memory for ai
>>
literally just don't update and it's fine
>>
>>108567395
only makes sense if you don't care for local at all
using wasteful languages like python makes it easier for the dev, but the consumer needs a lot more hardware and everything takes unreasonably longer than it needs to
only makes sense if you want to sell something really
>>
>>
>>
>>108567323
Catjack, you have to understand that anime is like a religion for some weebs, they take it way more seriously than your shallow and cheap view about diffusion. While you were relaxing all this time not genning, there are weebs who keep genning, perfecting their artist tags, loras, etc. making anime diffusion things.
>>
>>108567461
Sounds like a bunch of losers especially if they have nothing else to do than seethe at everyone else
>>
>>
>>108566457
Everything came out better than expected
>>
>>
>download anime model
>use it exclusively to gen realismslop
why
>>
>>
>>108567631
Fuck off fag
>>
How do you do gens of niche stuff without a LORA? For example an OC that only has one or 2 images.
>>
>>
>>108567660
img2img with denoise 0
>>
>>108567666
is this real?
>>
>>
i tested anima. not bad at all, and hot bodies already added. oh yeah
>>
File: rename.png (147.7 KB)
order is important anons. without order, everything goes to shit.
>>
File: lmao.png (662.2 KB)
babe wake up, another mid image model got released
https://huggingface.co/CSU-JPG/FlowInOne
>>
>>
>>108565108
why the dual samplers with an upscaler inbetween? what is behind that 'RTX latent upscale' subgraph? vae decode > rtx upscale > vae encode? lol
>>
>>108568229
>what is hires fix
>>
>>
>>108568240
a hires fix could be two things: latent upscale > 2nd sampler or an upscale with a model > resize > 2nd sampler. was just curious, w/e
>>
ok i changed my mind preview3 might not be total shit
>>
https://civitai.com/articles/28369
>civitai.red becomes the freedom-first front door (what civitai.com is today). All NSFW content, crypto payments. Fully invested in.
the fuck is this?
>>
File: shoot.jpg (723 KB)
>>108568473
It's pretty much explained there. Just read.
>>
>>108568473
when you go on civitai.red you get this shit lool
>>
>>108568473
they're rebranding to cope yet again after getting raped by payment processors and will continue to increasingly moderate characters/concepts that they don't deem 'appropriate'. it's just as censored as always. meanwhile NovelAI is uncensored and doesn't have to cope like this. local remains the censored kek option without any celebs or nono-concepts while API allows complete freedom
>>
>>108568575
>local remains the censored kek option without any celebs or nono-concepts while API allows complete freedom
unironically this
>>
>>108568229
Dual sampling is the way to go.

I use Z-Image -> Latent Upscale (point resize) -> Z-Image Turbo -> RTX Upscale (and then I downscale before posting).
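the "point resize" in that chain is just nearest-neighbour on the latent before the second sampler. minimal sketch (numpy stand-in, not the actual node code):

```python
import numpy as np

def latent_point_upscale(latent, factor):
    """Nearest-neighbour ("point") resize on a (C, H, W) latent."""
    return latent.repeat(factor, axis=-2).repeat(factor, axis=-1)

base = np.zeros((4, 64, 64))           # stand-in latent (~512px image)
hires = latent_point_upscale(base, 2)  # hand this to the 2nd sampler
print(hires.shape)                     # (4, 128, 128)
# the 2nd pass then re-denoises only partially (denoise well below 1.0)
# so the first pass's composition survives the upscale
```

staying in latent space skips a decode/encode round trip, which is why it's cheaper than the model-upscale variant of hires fix.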
>>
>>108568575
>NovelAI is uncensored and doesn't have to cope like this.
seriously though how do they manage to get away with it, look at OpenAI they're getting an investigation from the government
https://xcancel.com/AGJamesUthmeier/status/2042258048115265541
>>
>>108568473
"freedom first" (laughter in the background), BUT NO CELEBRITIES
"all NSFW content" - except the shit we purge(d), and of course NO CELEBRITIES
bunch of lying wankers.
>>
>>108568606
>buch of lying wankers.
this, fuck civitai
>>
>on site generator = local
???is anon retard???
>>
>>108568575
and seedance 2.0 is now available for every country (including the US) now, local lost
https://xcancel.com/nickvnturi/status/2042345384299892828#m
https://files.catbox.moe/3csh8s.mp4
>>
wan horse today?
>>
>>108568632
>#m
i can see why you like it.
and the reality of seedance
https://www.reddit.com/r/Seedance_AI/comments/1sgh0az/how_to_bypass_seedance_20_face_detection_method_2/
>>
>>108568645
>API doesn't want to do that
>local can't do that
grim
>>
>>108568632
>the 2D animation quality
>2D
It never ceases to amaze me how retarded the average netizen is. No wonder the datasets are slopped up with guys like this clouding the signal
>>
>>108568656
>local can't do image to video
>>
>>108568645
>the reality of seedance
https://xcancel.com/JSFILMZ0412/status/2042347708250292333#m
>Topview the only one that:
>lets you upload faces without any hacks
YOU LOST LOCALKEK
>>
>>108568593
I find it tricky to dial in z-image turbo, the upscaling part. I made a tiled upscale workflow with the TTP nodes, it works I guess. that rtx upscaler is fast as fuck tho, damn
>>
>>108568672
the only local video model that can do image2video with sound is ltx 2.3 and this shit is simply a plastic skin generator that can't keep the face consistent, but go on king
>>
>>
so does anon just repost cloud gens and "troll" local here every day or
>>
>>108568675
>posts a tweet complaining about how cucked the seedance api is
too shay.
>>
>>108568637
a happy horse just flew over my house
>>
>>108568637
it was fake >>108563651 :( (seriously though, they made a bullshit video model that was wan 2.7 and people believe they actually made something better and will open source it??)
>>
>>108568685
we've had anons sneak api gens in here for ages
>>
>>108568685
it's fun to see the localkek seethe and cope about it, they don't want to hear that their toy is considered deprecated in the world of big boys
>>
>>108568685
Yes and since the cloud threads are dead (I wonder why) no one cares to troll them
>>
>>108568593
>RTX Upscale
Completely forgot about this
>>
>>108568771
oh... that must be why he never leaves, if cloud threads are dead then no wonder
>>
>>108568682
>that rtx upscaler is fast as fuck tho, damn
Yeah, it's basically free. Quality-wise, I probably wouldn't keep the final output, but for supersampling back down it's nice to have the resolution available (3072x4096 in my case).
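the supersampling step is just block averaging when you downscale — sketch below with a crude box filter (whatever resize your editor or node actually uses will be a fancier kernel, but the averaging is what eats the upscaler's artifacts):

```python
import numpy as np

def box_downscale(img, factor):
    """Average each factor x factor block of an (H, W, C) image."""
    h, w, c = img.shape
    return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

hi = np.zeros((4096, 3072, 3))  # stand-in for the 3072x4096 upscaled frame
lo = box_downscale(hi, 2)
print(lo.shape)                 # (2048, 1536, 3)
```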
>>
>>108568766
large models running on corpo hardware are superior, when was that ever up for debate. but my setup can make cute latinas showing me THE GOODS. 拉丁裔女性 !!
>>
>>
>>108565108
Hardware?
>>
>>108568799
i don't think there are any saas threads left.
>>
>>108568882
dalle3 is still here
>>
>>108568886
oh yeah, i completely forgot that existed.
>>
>>108568815
>>108568593
based jenner, do you think a machine with a 3090 24gb of vram + 64gb ram could generate jennies of this magnitude?
>>
>>108568882
SaaS doesn't need a thread because SaaS outputs are now naturally integrated into the cultural zeitgeist. Dall-E 3 generated characters are now featured in Fortnite https://fortnite.fandom.com/wiki/Tung_Tung_Tung_Sahur
API-animated series 'fruit love island' became the world's fastest growing TikTok channel, reaching over 3 million followers in 9 days. Superbowl ads are being animated with API video models, Elevenlabs is being used in Hollywood productions. SaaS is being used for actual revenue-generating work and doesn't need a dedicated tinkertranny helpdesk because SaaS models actually output what you ask without needing to fiddle with 50 knobs (and still get lackluster results)
>>
kekd
>>
File: kek.png (107 KB)
>>108568943
AIEEE DON'T RELEVATE THAT WE HAVE 0 RELEVANCE IN THE ZEITGEIST
>>
Least brown cloud posts
>>
>>108568943
what a load of copy-pasted wank. that clean and sanitized corporate zeitgeist (laughter in the background) can go fuck itself.
>>
>>108568992
>local is not clean it can do subversive shit
yeah? like what? show me examples
>>
>>108568925
Sure. I'm on the same 24/64 memory config. I waited too long on a $440 128GB memory kit because I was saving for a Threadripper build. Now that same kit is like $2k... if you can find it in stock.

Anyway, my workflow spits out an image in ~38s (and ~45s before caching).
>>
>>108568992
Localluddites seething.
>>
>still no Z Image edit
>>
>>
File: file.jpg (119.4 KB)
>>108568473
So now they don't have payment processors to work with anymore?
>>
>>108569190
pretty astra!! can i have a catbox please?
>>
>>108569190
I think she sneaked a gang sign in there, sorry. vatos locos forever, ese
>>
>>
>>108568601
Looks like Grok will face more censorships and might even lose sovl as the last uncensored AI chatbot.
>>
>>108568766
Video? Sure, but not for long. API hasn't led image in raw creative capability since Chroma. Funnily enough, Dalle 3 was the closest thing to Chroma, but then it got censored. New GPT Image may be able to sneak a few things here and there, but still lacks some sovl from Dalle. API works backwards: it starts ahead, then regresses. Local starts behind, but progresses.
>>
>>108569456
>Video? Sure, but not for long.
I hope you're right anon
>>
Fresh when ready

>>108569503
>>108569503
>>108569503
>>
>>108569470
Remember anon, SaaS always regresses. Sora 2 and Seedream 2 quickly became a shell of what they used to be due to censorship. Sure, technically nothing local matches them yet. But something will soon that can do more like NSFW, and can be finetuned to do more based on edge cases, which is where local shines.

On the music gen side of things, Udio quickly removed its ability to be useful as a tool for musicians looking to do remixes, covers etc... after they took ownership of every song its users made. Suno could face similar levels of censorship, and with that possibility hanging over its head, ACEStep is leading the way with XL (which now has covers very close to Suno's quality).

There's never any guarantee that one will own API made assets, it's just impossible.
>>
>>108568186
>instead of just typing a prompt, you can now make a shitty jpeg of your prompt to make slopped SD1.5 gens!
Why? I mean, I guess being able to circle the designated spot to place objects or drawing an arrow to point at something would be helpful but why did they train it with text overlaid on the input image?
>>
Is there any way to regional prompt in comfyui that isn't completely fucking insane? i've been at it for 2 hours now and i get dogshit and it doesn't work. i tried Invokeai but the "regional prompting" there is more like regional suggesting, not nearly as good as forge or a1111, but those 2 are slow and clunky. do i have to pick the lesser evil? thoughts?
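under the hood every regional prompter (forge's, Invoke's, comfy's conditioning-mask nodes) is a variant of the same thing: a spatial mask per prompt gating where that prompt's conditioning applies. toy sketch with scalar stand-ins for the real conditioning tensors:

```python
import numpy as np

H, W = 64, 64
mask_a = np.zeros((H, W))
mask_a[:, : W // 2] = 1.0   # left half gets prompt A
mask_b = 1.0 - mask_a       # right half gets prompt B

cond_a, cond_b = 0.3, 0.9   # hypothetical per-prompt conditioning values
mixed = mask_a * cond_a + mask_b * cond_b

print(mixed[0, 0], mixed[0, -1])  # 0.3 at the left edge, 0.9 at the right
# "regional suggesting" = the UI feathers/weights these masks softly,
# so conditioning bleeds across the seam instead of hard-switching
```

which is why Invoke feels mushy: soft masks blend at the boundary, hard masks (what forge's extension does more aggressively) keep regions separated but can leave visible seams.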
>>
>>108569776
forge neo anon
>>
>>108569811
May the gods watch over you, friend.

Apparently anima is really good at regional prompting, especially when compared to illustrious, but im struggling to get the images to not look like slop
>>
Why can't you just use a larger text encoder with, say, anima, to improve how much it understands the prompt?
>>
>>108570500
Im running on a humble rtx 2080, but, imagine i don't have this limitation, which would you recommend?
>>
>>108570500
You can. But the question becomes how much will it improve at the cost of your own resources (since you'd have to retrain) in addition to the increased inference cost.
>>
>>108570484
>but im struggling to get the images to not look like slop
post your workflow / metadata
>>
>>108570511
Do you think it's worth it? I have 24GB VRAM so I have some room to spare.
>>
is WAN still king for local i2i? any workflows people can link me to? specifically for photo-realistic...
>>
>>108569503
