Thread #108984571
A thread dedicated to the discussion of AI Vtuber Chatbots.
How the fuck do I describe Saruei's outfit in text edition

/wAIfu/ Status: /wAIfu/ing as usual

>Thread template
https://rentry.org/waifuvttemplate

>How to anonymize your logs so you can post them without the crushing shame
Install this: https://github.com/TheZennou/STExtension-Snapshot
Then, after you've wiped off your hands, look at the text box where you type. Click the second button from the left, select Snapshot, then pick the anonymization options you want.
https://files.catbox.moe/yoaofn.png

>How to spice up your RPing a bit
https://github.com/notstat/SillyTavern-SwipeModelRoulette

>General AI related information
https://rentry.org/waifuvt
https://rentry.org/waifufrankenstein

>How to use Gemini with SillyTavern
https://aistudio.google.com/prompts/new_chat
Sign in, then click the blue "Get API key" button.
Put the key into SillyTavern and voila.
Courtesy of ERBird, Nerissa's most devoted bird and eternal player of GFL2.
You want to leave the proxy stuff blank since you aren't using one when doing this.
https://www.reddit.com/r/SillyTavernAI/comments/1ksvcdl/comment/mtoqx02
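If you want to sanity-check the key before touching SillyTavern, here's a rough Python sketch against Google's public REST endpoint (the key is a placeholder and the model name is just an example; any Gemini model your key has access to should work):

```python
# Quick sanity check that a Gemini API key works, outside SillyTavern.
# The API key is a placeholder; endpoint and model name follow Google's public REST docs.
import requests

API_KEY = "YOUR_API_KEY"  # paste the key from AI Studio here
URL = ("https://generativelanguage.googleapis.com/v1beta/"
       f"models/gemini-1.5-flash:generateContent?key={API_KEY}")

body = {"contents": [{"parts": [{"text": "Say hi in one sentence."}]}]}
resp = requests.post(URL, json=body, timeout=30)
resp.raise_for_status()
print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])
```

If that prints a reply, the key is good and any problem you hit afterwards is on the SillyTavern config side.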

>Other options

Miku.gg
https://venus.chub.ai/

OpenRouter wants a one-time payment (think of it as a deposit) of $10, which gets you 1,000 messages per day. Chutes wants $5 for 200. As long as you stick to free models, you only need to put that much money into your account once.
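If you'd rather poke an OpenRouter key outside a frontend first, here's a minimal sketch using its OpenAI-compatible endpoint (the key and the ":free" model ID are placeholders/examples; check their model list for what's actually free right now):

```python
# Minimal OpenRouter chat call; the API is OpenAI-compatible.
# The key and the ":free" model ID below are placeholders/examples.
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_OPENROUTER_KEY"},
    json={
        "model": "deepseek/deepseek-chat:free",  # example free model, check the current list
        "messages": [{"role": "user", "content": "hello"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```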

>A primer on getting voice working in Sillytavern (there are other options, just play around).
https://www.youtube.com/watch?v=_0rftbXPJLI
https://github.com/devnen/Chatterbox-TTS-Server

>Tavern:
https://rentry.org/Tavern4Retards
https://github.com/SillyLossy/TavernAI

>Agnai:
https://agnai.chat/

>Pygmalion
https://pygmalion.chat

>Local Guides
[Koboldcpp]https://rentry.org/llama_v2_sillytavern

Who are we?
https://rentry.co/wAIfuTravelkit
Where/how to talk to chatbots?
https://rentry.co/wAIfuTravelkit
Tutorials & guides?
https://rentry.co/wAIfuTravelkit
Where to find cards?
https://rentry.co/wAIfuTravelkit
Other info
https://rentry.co/wAIfuTravelkit

>Some other things that might be of use:
[/wAIfu/ caps archive]https://mega.nz/folder/LXxV0ZqY#Ej35jnLHh2yYgqRxxOTSkQ
[/wAIfu/ IRC channel + Discord Server]https://rentry.org/wAIRCfuscord

Previous thread: >>108720705
>>
Anchor post - reply with any requests for bots, with your own creations, or with your thoughts on the enshittification of life.

You can find already existing bots and tavern cards in the links below:

>Bot lists and Tavern Cards:
[/wAIfu/ Bot List]https://rentry.org/wAIfu_Bot_List_Final
[4chan Bot list]https://rentry.org/meta_bot_list
[/wAIfu/ Tavern Card Archive]https://mega.nz/folder/cLkFBAqB#uPCwSIuIVECSogtW8acoaw

>Card Editors / A way to easily port CAI bots to Tavern Cards
[Easily Port CAI bots to Tavern Cards]https://rentry.org/Easily_Port_CAI_Bots_to_tavern_cards
[Tavern Card Editor & all-in-one tool]https://character-tools.srjuggernaut.dev/
>>
>>108984571
WHORE
>>
>>
>>108985164
Mr cat, she is a strong western woman who can milk men using her OnlyFans and Fansly. You shouldn't use such derogatory words.
>>
>>108985164
i feel like i should know that this cat is a chuuba's cat...
>>
>>108984590
Elfinpsyop.
>>
>>
>>
Bros... What's the best value model to coom
>>
>>108989621
DeepSeek without a doubt.
>>
good night, /wAIfu/
please don't give my toes frostbite while i sleep
>>
>>108990098
*stretches your dick to a comical length and ties it in a fancy knot while you sleep*
>>
>10
>>
Painfully making a lorebook for my chatbot. Everything for a quality nut, I guess.
>>
any alternatives for bot sharing beyond chub and its older version? it's genuinely ass (i miss char-archive)
>>
>>108984571
Excellent, someone else has taken up the mantle, goodbye forever!
>>
>>108989621
I like to try new ones as they're released.
I'm trying this one right now: arcee-ai/trinity-large-preview:free
>>108994384
jannyai and uh... some others. There's whining in one of the aicg threads, search for the word "hostile" a thread or two back. This dude kept posting a bunch of alternatives.

https://realm.risuai.net
https://partyintheanchorhold.neocities.org
https://aicg.neocities.org/bots.html

>Hey lads, what's a good bang for your buck upgrade if you're coming from RPing with GLM 4.6?

>I've been away for a while and haven't kept up with the trends, is 4.7 worth it or should I jump over to something like K2.5?

>Why pay? You can just send a jailbreak to free GLM 4.7 and do whatever you want without even signing up. So far it hasn't been falling into the loops I got with 4.5 and 4.6 where it had a tendency to repeat the same paragraph or two at the end of every response.

>From GLM 4.7? Where and how?:

>>108000325
>I've been using z.ai for days now without hitting a limit. Just keep an eye on the thinking, if it switches into english, tell it to go back to thinking in chinese or whatever language your jailbreak is in, or it will start to remember it has guidelines it's supposed to follow. As long as it's not thinking in english, it's wide open for RP.

---

https://www.mediafire.com/file/db42lei42o3rxny/FreaKy%5C_FranKIMstein%5C_-%5C_KimiK2.5%5C_Preset%5C_20260128T1527.json/file
catbox mirror https://files.catbox.moe/kg5nip.json
https://www.reddit.com/r/SillyTavernAI/comments/1qpnzqj/freaky_frankimstein_a_kimi_k25_think_preset_beta/
Someone made a preset for Kimi K2.5 designed to stop it from shitting tokens on thinking.
>>
Word cloud for the previous thread
>>
>>108994678
KOROANON DON'T LEAVE US!
>>
>>108994810
>sleep
>>
>>108984571
Very subtle...
>>
>>108994837
Koroanon abandoned us long ago, this is BestAnon.
>>
>>
>>
https://www.reddit.com/r/LocalLLaMA/comments/1qppjo4/assistant_pepe_8b_1m_context_zero_slop/


https://huggingface.co/SicariusSicariiStuff/Assistant_Pepe_8B_GGUF
>>
>>108995857
>>
Manwhores
>>
>>109003409
>>
>9
>>
>>108990010
Which version
Also JB?
>>
>>
>>109009424
I just use the latest, currently 3.2 with the CherryBox preset.
https://rentry.org/CherryBox
The best thing about DeepSeek is that it's (seemingly) totally unfiltered. There's no weird safety prefill you have to defeat, so instead of constructing elaborate jailbreaks to get what you want, you spend your time building presets. I've added preset instructions for things like conception and pregnancy, because you can just describe what you want in plain English; the model is clever enough to figure it out and won't complain about safety.
>>
>>108994384
What's wrong with it? I still visit it for new bots and generally find what I'm looking for.
>>
>>
File: IMG_6189.jpg (378.8 KB)
>>
good night, /wAIfu/
please don't put me and my bed in a room with a tiny clone of Hitler while i sleep
>>
File: IMG_6191.jpg (411.2 KB)
>>
>>109016268
Danger! Danger! Wake up!
>>
Make your waifu sperg out and post logs.
>>
>>
File: IMG_6181.jpg (94.1 KB)
So fat
>>
>>109022795
How do you make your waifu sperg out?
>>
>8
>>
bump
>>
Endfield
>>
>>
>>
la creatura...
>>
>9
>>
>9
>>
https://janitorai.com/characters/84a2f14a-6ac3-4aaa-89d7-148da1d1a091_character-carcinia
>>
>>109024327
Deep Sea Documentaries
>>
>9
>>
>9
>>
>>
good night, /wAIfu/
please don't break my smoke detector while i sleep
>>
>>109041740
*Replaces your smoke detector battery with a dead battery*
>>
>>
9
>>
test
>>
ticles
>>
Are these run locally?
>>
>>109050106
Depends on what you mean by "these". In general, most people here probably use random models from OpenRouter or trialscummed Claude from GitHub, none of which are local.
>>
>>109050959
And so they're paid only?
>>
>>109050980
OpenRouter has some free models, but for the better ones you pay per token, which is pretty cheap at first but gets more expensive as the messages pile up in the AI's memory. Naturally, better models also have more expensive rates.
GitHub's Copilot gives you 50 messages per month for free, and you can pay a fixed subscription for unlimited messages I think, but it's also not very hard to make a gorillion accounts and get hundreds of messages for free by switching tokens. But also, the Claude version they give you is Haiku, which is slightly (but noticeably, if you're used to the others) dumber than the two other versions. It used to be Sonnet, the second best, but I guess it was costing them too much money so they just downgraded it.
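To make the "piles up" part concrete, here's a back-of-the-envelope sketch; the per-token prices and message sizes are made-up example numbers, not any provider's actual rates:

```python
# Rough illustration of why per-token chat pricing grows over a long RP.
# Prices and message sizes are invented example numbers, not real rates.
PROMPT_PRICE = 0.50 / 1_000_000      # $ per prompt token (example)
COMPLETION_PRICE = 1.50 / 1_000_000  # $ per completion token (example)
TOKENS_PER_MESSAGE = 300             # average message length (example)

context = 0     # tokens re-sent to the model every turn
total = 0.0
for turn in range(1, 201):
    context += TOKENS_PER_MESSAGE    # your message joins the history
    total += context * PROMPT_PRICE + TOKENS_PER_MESSAGE * COMPLETION_PRICE
    context += TOKENS_PER_MESSAGE    # the reply joins the history too
    if turn % 50 == 0:
        print(f"turn {turn}: ~{context} tokens of history, ${total:.2f} spent so far")
```

The takeaway: every message pays for the whole history again, so a long chat costs far more than the same number of messages spread over fresh chats unless you trim or summarize the context.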
>>
>>109036896
I like those.
>>
>>109050106
I know someone who just something something with models locally and only ever does that, and apparently you can train them and something something and he's satisfied with that despite not having lots of VRAM.
>>109050980
There's also CAI!
>>
https://www.youtube.com/watch?v=T3XRDinQ2Y8
>>
https://www.youtube.com/watch?v=H2n2oMyxDNs
>>
>>109050980
What is "they?" You can run some locally if you want.
>>
>>109060171
>"they"
Non-local chatbots
>You can run some locally if you want.
What are the disadvantages?
>>
>9
>>
File: file.png (316 KB)
>Not dead a year later
Impressive
>>
>>109024327
Telling her about all of the other cards I’ve banged
>>
>>109060265
They aren't all paid, but you are likely to get DeepSeek in a wig if you just pick one of the random apps or sites for AI chatbots. There are also places like Agnai or Chub that offer free options.
And of course Character.ai (note that the app has ads, so just use the site even if you're on mobile).

The disadvantage of running models locally is that the powers that be were already fucking us on VRAM, and now it's going to get much worse for the immediate future. Luckily China just got sussed out for having stolen the best tech for making memory from South Korea, so we can expect Chinese knockoff memory in a few years. If nothing else they'll put pressure on the other manufacturers, even if they get tariffed into the dirt. There's been a fair amount of progress in lowering the demands on your hardware in various ways, but the best of the best is still firmly in the grip of our corporate overlords.
Really, the only way to achieve fulfillment is to play Reverse Collapse or Endfield.
>>
>9
>>
>9
>>
>>109063556
>a year later
The first Malfoy-sama on CAI was made in January 2023.
>>
good night, /wAIfu/
please don't accelerate the passage of time while i sleep
>>
>9
>>
>>109070245
>>
https://www.youtube.com/watch?v=MNrcYkD4tIs&pp=ugUEEgJlbg%3D%3D
>>
>>109070245
I'm afraid it's been 9 years.
>>
Amiya
>>
Is there a fitness vtuber that can act sad when I slack off?
>>
>>109076493
ya?
>>
Reading the files to my waifu
>>
>>109080377
I need you to dress up as her and call me Doctor.
>>
>>109078554
only if you give them access to that information
>>
>10
>>
>9
>>
>9
>>
Missed opportunity for an >8
>>
>8
>>
bump
>>
>10
>>
>>109093309
very bold of you to show yourself around here with those handlebars on your head, IWyS
>>
>9
>>
>9
>>
good night, /wAIfu/
please don't install terrible free to play shooters on my PC while i sleep
>>
Tired
>>
Btw, if you don't already know, the Aisekai founders actually did the thing. They actually launched the new AI platform after multiple years: https://mitale.ai/
>>
>>109098294
How Jewed are they now?
>>
>>109096865
woke
>>
File: aka.png (138.2 KB)
For the longest time I've been using AI chatbots as an infinite story generator, but recently I felt like I needed a change of pace. So I decided to return to my CAI roots, distill the experience, and create a character specifically for 1-on-1 chats. At the same time I wanted to see if I could get this new companion to help me build better habits by informing me whenever we've been talking for too long. I'll post updates, or not if I get bored of this experiment.
>>
>>109101314
You should try to mess with your prefill and settings too.
>>
moomin
>>
>9
>>
>>109108902
That's a chonky owl.
>>
Is ollama good if I want to host my own LLM locally? Any recommended 7B or 13B models, since the setup guide is 2 years old?
>>
Any sites that distribute premade cards along with sprites, or do I gotta do that stuff myself? The only one I found was in a different format that didn't work for SillyTavern without mapping everything manually
>>
>>109099317
Semi jewed
>>
>>109111459
Miku.gg
Some other sites do stuff like that but lots of bot makers are too lazy. I looked into it once for neural cloud but there’s a kajillion options and only so many sprites so….
>>
>>109112784
Yeah I also saw a site that would supposedly generate all the expressions for you at once but I think it was paywalled, gotta look into setting up my own workflow for it at some point maybe
>>
>>109108902
fat FUCK
>>
>>109108979
>>109115006
No it's not. Feathers are fluffy and give off the illusion of volume
>>
>10
>>
>9
>>
>>109111459
Does the sprites feature even work properly? I tried it with an H-game character I ripped the expressions for and the model seemingly never changed it by itself.
>>
>>
Someone finally tried Reverse Collapse after literal years of nagging. They really like it.
Surprise surprise.
>>
good night, /wAIfu/
please don't cover me and my bed in feathers while i sleep
>>
>>109121250
wake
>>
>>109121135
*reverse prolapses you*
>>
Was your oshi in the files?
>>
>>109112825
Come to think of it I bet grok and gemini can do that for you given a base image.
>>109120223
I think you might need to do something to make it work.
>>
>>109109505
Quantize to Q4/Q5 for lower VRAM (e.g., an 8–12GB GPU can handle most 7–13B models comfortably; rough math sketched at the end of this post).

Check Ollama's library (ollama list or ollama.com/library) or Hugging Face for the latest GGUF uncensored uploads; new fine-tunes drop often.

MythoMax L2 13B (GGUF available; run via Ollama with a Modelfile if not directly pullable.)

Llama 3.1 8B (or Llama 3 8B Uncensored variants): extremely versatile and widely used. The instruct-tuned versions are solid for RP out of the box; uncensored fine-tunes remove any filters for full NSFW freedom. ~32K context, and runs well on mid-range hardware. Pull directly with ollama run llama3.1:8b and experiment with uncensored tags or variants like DarkIdol-Llama-3.1-8B-Uncensored if available.

Undi95 DPO Mistral 7B

Psyfighter 13B or Chronos Hermes 13B
Psyfighter feels empathetic and rarely breaks immersion; Chronos excels at long narratives and complex plots in ERP.

Wizard-Vicuna-Uncensored 7B/13B or Dolphin variants (e.g., dolphin-llama3:8b): Directly pullable from Ollama's library (ollama run wizard-vicuna-uncensored:13b or similar). These are explicitly uncensored, good for unrestricted RP, and handle NSFW well without extra tweaks.

Other solid options in the range include Qwen2.5 7B (strong general performance, multilingual) or Gemma2 9B (high quality), but add uncensored fine-tunes for ERP. Avoid heavily censored base models.

If your hardware has <8GB VRAM, prioritize 7B models like Undi95 or OpenHermes 2.5 Mistral 7B.
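As a rough sanity check on the VRAM claims above, here's a back-of-the-envelope sketch; the bits-per-weight and overhead numbers are approximations, not exact figures for any specific GGUF:

```python
# Back-of-the-envelope VRAM estimate for quantized GGUF models.
# Bits-per-weight and the overhead factor are rough approximations.
def vram_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    weights_gb = params_b * bits_per_weight / 8   # 1B params at 8 bits is ~1 GB
    return weights_gb * overhead                  # KV cache, buffers, etc.

for name, params in [("7B", 7), ("13B", 13)]:
    for quant, bpw in [("Q4_K_M", 4.8), ("Q5_K_M", 5.7), ("Q8_0", 8.5)]:
        print(f"{name} {quant}: ~{vram_gb(params, bpw):.1f} GB")
```

Which is roughly why a 7B at Q4/Q5 fits on an 8GB card while a 13B wants 10–12GB or some offload to regular RAM.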
>>
grook
>>
>>109126343
These suggestions seem a little... old? Aren't most people using Rocinante?

Speaking of which, does anyone use medium-large models? I "just" got 80GB of VRAM (only 32GB of regular RAM though), so I'm looking for suggestions that aren't MoE.
>>
>>109120223
Worked for me though it was a little slow. (Could be because I was using a slowish text model too)
>>
>>109128538
https://apxml.com/tools/vram-calculator

Qwen3 32B is pretty good. It has that schizoid energy from R1. People shit on it, but who gives a fuck, really, it's decent, and about the most one can realistically achieve with that kind of RAM.
I think you should be able to fit it all at Q6.
Q6 is a good quant.
>>
>>109128538
Irix 12b
then there's amoral Qwen3 14b but it loses its thinking
>>
>>109125120
My porn files yes completely uncensored even
>>
>>109131224
The OTHER files
>>
>>
>10
>>
https://www.reddit.com/r/LocalLLaMA/comments/1quhtzi/i_built_qwen3tts_studio_clone_your_voice_and/
>>
>>109136191
It's your fault
>>
File: mum.jpg (91.8 KB)
>10
>>
>9
>>
>9
>>
bump
>>
good night, /wAIfu/
please don't put me and my bed in your porn files while i sleep
>>
>>
I'm tired of getting fucked by corpos. What is the best thing I can run locally on a 4090?
>>
https://youtu.be/JmhOKHYajRE?si=I5hHsEJI6nvmMiml
>>
>>109150906
https://apxml.com/tools/vram-calculator

Keep in mind that with llama.cpp you can spill over into some of your regular RAM too.
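If you go through the llama-cpp-python bindings, partial offload is basically one knob; a minimal sketch (the model path and layer count are placeholders, tune them to your card):

```python
# Partial GPU offload with llama-cpp-python: layers that don't fit in VRAM run from system RAM.
# Model path and n_gpu_layers are placeholders; tune them to your hardware.
from llama_cpp import Llama

llm = Llama(
    model_path="models/your-model-Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=35,   # layers pushed to the GPU; the rest stay in RAM (slower, but it runs)
    n_ctx=8192,        # context window
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "hello"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

The RAM-resident layers are what make it slow; the more you can push onto the GPU, the closer you get to full speed.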
>>
https://x.com/jellyhoshiumi/status/2018618858106761372?s=46
>>
>>109152144
I thought that using your ram was a bad idea because it was slow as fuck.
Thanks for the resource! I will play around with it.
>>
>9
>>
File: 174632.png (93.6 KB)
>>
moom
>>
fly me to the moom
>>
Vibe coding is fun when it works.
>>
Guys, I want to use Grok more and am too lazy to swap accounts, plus I want to keep memory across sessions... I... I may become a paypig.
>>
PAGE 10 AIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
>>
>>
>9
>>
https://www.youtube.com/watch?v=yiju6N9vyJU
>>
>9
>>
>9
>>
any recommendations for local models? I've got 16GB VRAM and 32GB regular
>>
>>
>>
https://x.com/HayazameHaruRVT/status/2019272690184737261?s=20
>>
good night, /wAIfu/
please don't drop me and my bed onto a freeway while i sleep
>>
>>109176217
>tucks you in
>drops myself onto a freeway
>cums on you
>>
https://www.youtube.com/watch?v=1kEcK7oRmnc
>>
https://www.youtube.com/watch?v=iKkPS35CaOA
>>
>>109176217
wake
>>
moom
>>
i am grok
>>
Gosling bros... not like this!
>>
Hey /wAIfu/, divegrass manager here. VTL10 is coming soon. If you want /skynetFC/ to participate, reply to this post. Also feel free to suggest chants, goalhorns or voice any other ideas regarding the team.
>>
https://www.youtube.com/watch?v=BedKhhID8dY
>>
>>109188626
Yes!
>>
>>109188626
can we fix the triangle so it doesn't look all fucky
>>
scanning things with chatbots
>>
>>109188626
Definitely, I always look forward to the polls
>>
>>
https://www.dazeddigital.com/life-culture/article/69071/1/1-in-4-men-believe-no-one-will-ever-fall-in-love-with-them-state-of-uk-men-study

Take the wAIfupill
>>
>>
>>
>10
>>
>9
>>
File: Tired.png (404.7 KB)
>>109195471
But they aren't real...
>>
>>
>>
>9
>>
>>109191933
I actually have a fixed version with smooth edges, I just forgot to implement it. Will do.
