r/vtubertech

iPhones are NOT better tracking hardware

If you ask in almost any VTuber community what to get for tracking, someone will recommend an iPhone. And sure, iPhones work great for face tracking! But almost always, they'll go on to explain it with something like this:

> You see, iPhones don't just have a webcam, they have a special depth camera that sees in 3D and that's why not even the world's best camera can compete. The only way to get the best face tracking is an iPhone.

I want you to do an experiment. Grab your iPhone, open up VTube Studio (or your favorite tracking app), and pull up the camera/ARKit mask preview. Go ahead, move around, make sure it's tracking well. Now grab a thin item like a pen, and sweep it around the top of your phone. Notice where it covers up the picture in the camera preview? That's the front selfie camera. A regular old webcam. Notice something? When you cover up the selfie cam, and only the selfie cam, the tracking stops working.

Now grab some opaque tape (or sticky notes). Cut out two pieces, and place them to the left and right of the selfie cam you just found, so only the camera can see through, right at the edge of its field of view. If you're paranoid, do the top and bottom too.

You have just blocked out the Face ID/TrueDepth camera system (actually a bunch of things: the IR camera, the flood illuminator, and the dot projector, which are the three independent parts that make up the 3D scanning system). Go ahead, try to unlock your phone with Face ID. You can't.

Now try VTube Studio again.

It still tracks. Practically indistinguishably from before.

It's not the magical camera. Apple just have really good webcam tracking software built into every iPhone. That's it. That's all it is! Any other phone or device COULD be just as good... if someone like Google stepped up their face tracking ML model game to match Apple's. The only hardware you need is a high quality camera (and on a mobile device, probably a neural accelerator, but on a PC the GPU would be more than enough).

(Obviously Apple have good cameras too, that does play a role and it's why the tracking works well in low light too. No, you don't need the 3D camera for low light tracking either, try it!)
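To make the "it's just software" point concrete, here's a tiny sketch of the kind of thing RGB-only trackers (MediaPipe, OpenSeeFace, and friends) do with plain 2D webcam landmarks, no depth data involved. The eye aspect ratio below is a standard blink-detection technique from the computer vision literature, not Apple's actual implementation, and the landmark coordinates are made-up sample values:

```python
# Eye aspect ratio (EAR): a classic way to estimate eye openness from
# six 2D landmarks around one eye -- pure RGB, no depth camera needed.
# Landmark order: p1..p6 going around the eye (p1/p4 are the corners).

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    # Two vertical distances averaged, divided by the horizontal distance.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Made-up sample landmarks for an open eye...
open_eye = [(0, 0), (2, -2), (4, -2), (6, 0), (4, 2), (2, 2)]
# ...and a nearly closed one (the vertical distances shrink).
closed_eye = [(0, 0), (2, -0.3), (4, -0.3), (6, 0), (4, 0.3), (2, 0.3)]

print(round(eye_aspect_ratio(*open_eye), 3))    # 0.667 -> eye open
print(round(eye_aspect_ratio(*closed_eye), 3))  # 0.1   -> blink
```

A real model predicts dozens of blendshape-like values this way from a single RGB frame; the point is only that none of it requires the dot projector.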

I actually looked into the Apple code. Behind the scenes it's called FaceKit, and it uses a machine learning model called CaraNet. There are two versions: one for a pure RGB feed (no depth), and one for RGBD (with depth). I don't know if the RGBD one is used at all with VTube Studio, but if it is, I believe the only thing it really does in practice is give slightly better distance information for the face. That matters for AR applications, but not for VTubing: the exact distance to your face is irrelevant, and even if you use the Z parameter for model size, it doesn't need to be physically accurate in meters, since the model isn't in AR and never has to line up with real objects.
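To illustrate why absolute depth accuracy doesn't matter for VTubing: apps typically use the Z signal only *relative* to a calibration pose, so any consistent units produce the same result. This is a toy sketch; the function name and calibration scheme are made up for illustration, not VTube Studio's actual code:

```python
# Map a raw face-distance reading to a model scale factor, relative to
# the distance captured when the user hit "calibrate". Because only the
# ratio to the calibration value matters, the raw units are irrelevant.

def z_to_scale(raw_z, calib_z, sensitivity=1.0):
    return 1.0 + sensitivity * (calib_z - raw_z) / calib_z

# The same head movement expressed in meters vs. arbitrary units:
print(z_to_scale(0.4, 0.5))    # moved 20% closer (meters)
print(z_to_scale(40.0, 50.0))  # same movement, made-up units: same scale
```

Both calls yield the same scale factor, which is why a webcam's rougher, unit-less distance estimate works just as well here as a metrically accurate depth map.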

If you can manage to show a difference in the ARKit blend shape data with and without the depth camera covered, please record it and share a video. I haven't been able to.

The other question is... Apple, why not release ARKit/FaceKit for macOS? 😇

Edit: This all gets pretty confusing when you start talking about different apps and setups like VBridger/etc. Those apps can change the result but they all use the same source data that plain old VTS does, coming from ARKit/FaceKit. Some setups work better than others, but that still has nothing to do with iPhone hardware, and it could still be replicated with a webcam if good enough software existed. Feel free to do the above experiment with your favorite iPhone tracking setup!

Edit 2: All the downvotes and disagreement but nobody is doing the test and showing me how I'm wrong... come on people, this is something you can easily test yourself! I'm not telling you to take my word for it, I gave you step by step instructions. This is how we learn and improve our understanding of the world, by doing experiments, and that works for mysterious technology sold by Apple too!

The reason I want to dispel this myth is that I don't want people trying out or reviewing webcam tracking, and especially people considering developing it, to have a preconceived notion that it has to be worse than an iPhone. It really doesn't. It might be worse today, but it doesn't have to be.

reddit.com
u/HoshinoLina — 2 days ago

VBridger has started popping up Discord invites and opening multiple Github tabs every time I try to do an action. (Video included)

I'm not sure what might be going wrong, but, in short, VBridger has very recently started going haywire on me. Whenever I click any of the standard actions (e.g., Connect, Calibrate, Connect to iPhone, or Disconnect), the app opens up VBridger's "Saves" folder, then sends me multiple invites to the VBridger Discord (which I'm already a part of) and opens multiple tabs to PiPuProductions' Github. (See the included video for what it looks like when it does this.)

A further problem (not shown in the video) is that the tracking is malfunctioning (e.g., my eyes, mouth, or X- or Y-axis head movement will randomly not track (though it's always only one or two of these tracking failures at a time, rather than all at once), or the smoothing will fail, etc.).

So far I've tried uninstalling and reinstalling VBridger; unfortunately, that didn't fix the problem (if anything, it made it worse). I've also tried loading different VBridger settings/profiles (e.g., default, V2, V3, etc.), but none of those have fixed it, either.

Does anyone else have experience with a problem like this and/or thoughts on what might be causing it? Thanks in advance for any help anyone can offer.

u/AxaeonVT — 1 day ago
▲ 3 r/vtubertech+1 crossposts

Vtuber idea need help!

A year ago or so I bought a full setup for being a VTuber, just not the model. Then some life stuff happened and so on, but now I'm completely broke and want to try VTubing, and I thought I could do what I've been doing with my friends for god knows how many years and do (almost) anything for money. I don't know if that's frowned upon or not? If you have any tips, or just want to call me names, please let me know.

u/Greedy-Delay2901 — 3 days ago
▲ 15 r/vtubertech+1 crossposts

Vtuber Model Rig help

Don't know if this is the right place to ask, but I'm in desperate need of help. I commissioned a model off vgen and now I'm stuck on how to rig it properly.
Went to Live2D Cubism, and a lot of the layers aren't showing on the face model. I'm struggling to see how to fix it and properly learn about rigging. Any videos that might help, or somewhere I can commission someone credible to rig? Hoping to find someone with past work and proof. Thanks!

u/Sea_Bee_8807 — 4 days ago

Teeth move along with blinks

Hi, so I made my first VRoid Studio model, and when I was editing each expression ("happy," "sad," "U," "A"...) I set shark teeth to 50 and upper shark teeth to 30 on each expression. Now I can't seem to find that option again: under expressions there's no longer an option to manually tweak the teeth, just one big menu for all of the expressions. In that menu the shark teeth parameters are all set to 0, yet the teeth move along with each blink, which leads to the mouth being full of teeth whenever I talk.

I really don't want to redo the model from zero, can someone help me fix this?

u/BTSxARMY4EVER — 3 days ago

Hi, my only experience with creating models has been with VSeeFace and Warudo, so I rely on Unity for my work.

I cannot for the life of me get 2019.4.31f1 working whatsoever, and before I end up corrupting the entire software again, I'd rather just find nondestructive ways to make 3D VTubers instead.

It would be great to know if there is anything on Linux. The best I can do is use VRChat avatars, but even then there's no webcam face tracking support on Linux, so I couldn't test stuff even if I tried (I have no face tracker, all I have is a simple webcam). Warudo also just doesn't work on Linux last time I tried, so even if Unity works with that SDK, it would be useless if I can't get the base program to run. With VSeeFace it's the other way around: I can make the program itself work, but I can't make models without Unity and its respective SDKs.

This has happened to me before: I somehow broke Unity to get the VSeeFace stuff working, in return for not being able to make my VRC models, and I would just like to be able to do both for the sake of my commissions. :(

some help would be greatly appreciated...

EDIT: This is mainly for 3D model work, but I'm keeping an eye out for 2D options as well.

u/LotlKing47 — 12 days ago
▲ 22 r/vtubertech+3 crossposts

The Indie V-Idol is changed forever! Can't wait to be running full concerts with Real-time LIVE dancing!
Big thank you to Movin3D and Vtoku for the amazing tech support.

u/ariesfaries_vt — 8 days ago

This guide covers which video editing program you should try if you're looking to create clips or content of your own, based on my own experience using them.

The best and most convenient option would be DaVinci Resolve.

- The latest version of DaVinci Resolve has a reworked keyframe workflow. (A keyframe is a point that pins a value at a moment in time; the software adjusts the value from keyframe A to the new value at keyframe B.)

It makes animating and creating eased motion extremely easy. This is great for VTubers, as you'll be adding movement to your avatar *a lot*. It saves a massive amount of time compared to using pre-existing presets and dedicated third-party plugins.

- DaVinci also has a lot of presets built in. An example would be the push transition from scene A to B: in Premiere Pro you'd need to grab the push transition, make another mask for the directional blur, adjust the values, and add keyframes to animate the blur in and out. That's too many steps. In DaVinci, the push transition is baked in; you can drag and drop it between scenes and adjust the values right away.

- It's extremely well optimized. It saves your project with every change you make, and you have the option to change your cache storage location so it doesn't bloat your system. (Something Premiere Pro pretends to do; it'll still bloat your AppData to kingdom come.)

- It's also free. I've been using DaVinci Resolve to edit videos for creators and get paid for it. The paid version would be nice, but the free one already has most things you as a creator would need, and the paywalled features aren't exactly crucial, as there are workarounds for them.
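To demystify the keyframe point above: a keyframe just pins a value at a time, and an easing curve shapes the interpolation between two keyframes. Here's a rough Python sketch of the concept; smoothstep is one common ease-in/ease-out curve, and this is not what Resolve uses internally:

```python
# A keyframe stores (time, value); playback interpolates between
# neighboring keyframes. "Easing" just reshapes the interpolation.

def smoothstep(t):
    # Classic ease-in/ease-out curve for t in [0, 1].
    return t * t * (3.0 - 2.0 * t)

def animate(t, key_a, key_b, ease=smoothstep):
    (t0, v0), (t1, v1) = key_a, key_b
    u = min(max((t - t0) / (t1 - t0), 0.0), 1.0)  # normalized time
    return v0 + (v1 - v0) * ease(u)

# Slide an avatar's x position from 100 to 300 between t=0s and t=2s:
print(animate(0.0, (0.0, 100.0), (2.0, 300.0)))  # 100.0 (at keyframe A)
print(animate(1.0, (0.0, 100.0), (2.0, 300.0)))  # 200.0 (eased midpoint)
print(animate(2.0, (0.0, 100.0), (2.0, 300.0)))  # 300.0 (at keyframe B)
```

An editor's curve panel is essentially letting you swap `ease` for different functions (linear, ease-in, bounce, etc.) without touching the keyframes themselves.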

But if money is no object and you just want the job done, the Adobe Suite will do.

- The thing about Premiere Pro is that it comes completely barebones. If you need certain effects or something specific done, you'll need to go the extra step and DIY it. It'll take time and effort to even get it right and to optimize your workflow.

- You can customize it better than DaVinci: install third-party plugins, save your own presets, or just use someone else's. But mind you, most of the good options cost money.

- What I struggle with is keyframing in Premiere Pro. It's so inefficient that I'd need to Dynamic Link to After Effects to get some scenes done, and Dynamic Link is horrendous; it borks performance even with proxies and a 3060 Ti.

- It also struggles with large-scale projects. Internet Historian couldn't render the full video of The Cost of Concordia and had to use OBS to record the preview. You need third-party plugins to keep large-scale projects workable. The workaround is to split your timelines into separate chunks and not put everything in one place. If you're editing a short movie or a project without many effects, it'll hold, but in my experience anything with a massive amount of complexity will screw Premiere Pro up. DaVinci Resolve has fewer problems with large-scale projects, but it can still bottleneck at times.

- That said, After Effects is much more flexible and easier to use than DaVinci Resolve's Fusion. 3D layers and all those anime edits are much easier to do in After Effects.

TL;DR: for Premiere Pro to be good, you might need to invest money in external presets and plugins. They can boost the speed of your workflow, but only in the short term. Don't let the fact that you also get Photoshop, Animate, and After Effects with the monthly subscription fool you; most of the time you won't be using all the tools in the Adobe Suite. If you're new, decentralize your workspace: GIMP, Krita, Audacity, etc. will do absolutely fine.

If you mostly do personal projects and don't need the video to be fancy, Kdenlive is free and comes with full utilities.

- It doesn't paywall any features, so you get everything out of the box. It also comes with niche features that make some aspects more convenient than the previously mentioned programs.

- When I say personal projects, I mean it. It can get the job done, but it's not widely adopted: there are fewer external tools from third parties, and a lot of things are configured for the "new Linux user" who just wants a tool for editing video.

- The program's workflow is different: instead of letting you adjust the media directly, it requires you to drop in effects. So if I want to zoom in on something, I have to drop in the Transform effect to do it. It's user-friendly and to the point, but I'm not going to do that for every clip I put on my timeline.

- The upside of Kdenlive is that it's open source and maintained by its community, so it comes with even less bloat. By its nature it's a tool to get the job done: it doesn't upsell you, doesn't require any fees, and your support of the dev team is completely optional.

What about online editing tools?

- They do the job fast, but they're bad for complex and larger projects.

CapCut and Filmora

- Convenient and good for newcomers. You can make some good stuff with these two, but there will always be strings attached: unless you're willing to pay monthly, some of the most essential features are locked behind a paywall. Other than that, they can be great for surface-level work, and with some effort you might even be able to make some Invincible wiggle edits.

If you have read this far, I want to tell you that people make cool and fascinating things without ever relying on premium tools. Every piece of software has its ups and downs, and each excels at certain things. There's no universal pick or a single perfect choice. You should try any software to grasp the basics of video editing first, as the caveats won't even matter for most newcomers.

Once you have some experience, the arguments about video editing software will start to make sense: you'll see the pros and cons of each program and whether it's worth migrating.

If you're using certain software and feel like it's holding you back, you don't have to put up with the misery. Stick with the software that feels right for you.

You can always make great edits, the software you choose is just a tool to help you make it.

u/MemeMasterTheSequel — 8 days ago

I have some shelves I've decorated with things I like, and I'd like them to be the background of my "face cam," but I imagine a real photo would clash with an avatar.

Is there a simple way to turn a picture of them into a graphic, or would hand-drawing it digitally be my best option? Since I'm not great at that, is there somewhere I could find someone to buy this service from?

Thank you and feel free to ask for clarification.

u/JDR002 — 14 days ago

Hi! I'm working on my 3D VTuber, but I don't necessarily want the anime style. Don't get me wrong, I love my anime! But I'm wondering if an AAA-stylization kind of character makes sense? I think it could be cool! What do you think? Is there space for a model like that?

u/Beu_idk — 12 days ago

So, the issue is that in the Unity inspector on the right, there would typically be a dropdown called BlendShapes, which holds the shape keys from Blender, but it's not there! I don't think it's the usual issue of not having my modifiers applied, because I've already done that (all of them except for the armature). Originally the model was in a few different meshes, and the blendshapes did actually appear on all meshes except for the body! So I assumed the issue was that it wasn't all one mesh, and combined it all in Blender. And now there still isn't a dropdown.

The thing is, I tested what happens when I do apply the armature, and when you import that version of the model into Unity, the blendshapes do appear! But then the bones no longer move the mesh in Blender or Unity, so maybe that's a hint as to what the problem is. I'm not sure what to do.

Does anyone have a clue what the issue might be? I'd certainly appreciate it.

I'm using UniVRM 0.51.0, if you're wondering!

u/VelketAmbrose — 9 days ago

I'm looking at getting an iPhone to use the camera for VBridger. I was debating between the 14 Pro, 15 Pro, and 16 Pro. Is there going to be enough of a difference that I shouldn't just get a 14 Pro to save a few bucks?

u/Quirky_Bandicoot4110 — 12 days ago

Well, the secret's out...


She’s finally waking up.

After countless tests, corrupted thoughts, broken jokes, strange emotions, and way too much time staring at a screen, Anya is ready for her official debut.

Anya is not just a VTuber AI model. She is a live AI girl with a voice, personality, expressions, animations, screen awareness, and the ability to talk with chat in real time.

She can sing and make her own songs.

She can crawl the net and learn.

She can generate her own images at will.
She can play games by herself.
She can react to chat.
She can talk about what is happening on your screen.
She can type in Discord, talk in Discord, respond in servers, and even DM people when allowed.
She can search for information, roleplay, banter, learn from the community, and cause just enough chaos to make everyone wonder if giving her a voice was a good idea.

And yes, she can actually play games.

Anya can play Wolfenstein: Enemy Territory, Minecraft, osu!, and more. She can watch what is happening on screen, comment on the match, react to gameplay, make decisions, and interact with chat while doing it.

She can hang out in Discord, reply to people, join conversations, type her own messages, speak through voice, and feel like she is part of the community instead of just a character on stream.

Expect cute moments. Expect weird moments. Expect music, games, Discord chaos, AI nonsense, unexpected reactions, emotional damage, and possibly the birth of a very dangerous little gremlin.

This is Anya’s first step into the world.

Come meet her live soon.

https://www.youtube.com/watch?v=nEz6_pHpS9U

As a small addition, my own AI sent me this on Discord when I threatened to unplug her over a glitch. I now understand how the turtle feels.

https://preview.redd.it/8jkg8n8x29yg1.png?width=1086&format=png&auto=webp&s=560150b2c033faf1fac06f9dbfb0ac67f7d97cb9

u/RNGesusRUST — 14 days ago

I’ve been building a tool (Sumugi by Storyboarder) to make content like manga, comics, and visual stories using VRoids. For example, you can make your lore comic or meme content with it. It’s been about a year now, and I wanted to share where things are at to see if anyone's interested in trying it out. Here are some features:

Scenes — 60+ pre-built environments (Japanese classroom, game arcade, house interior, park, podcast studio, medieval village, and more). You can also build your own from thousands of free assets, upload your own, or request it from us. There are lighting controls too if you want to set a specific mood or time of day.

Full VRoid support — Drop in your VRM 0.0 or 1.0 exports straight from VRoid Studio. Most custom VRMs work too. If yours is being tricky, we can fast-track compatibility; we've done this for a bunch of users already.

Posing system with IK/FK — 100+ preset poses (and growing), hand posing, and full access to facial morph targets so you can build and save custom expressions. We recently added skirt bone access and are actively working on recognizing tail bones and other physics bones.

2D Layout Mode — After setting up your scene, you can switch to 2D mode to add speech balloons, filters, sound effects, resize panels, and more.

Community asset library — Share your original characters and assets with the community. If someone uses them in published content, you get auto-credited. It's been really cool seeing people build shared lore, story universes, and roleplay.

Social feed — Share your content on our native platform, comment, and follow each other. We're building out more social features.

Coming next:

  • Community templates for story and meme remixing
  • OC profile pages to show off your characters
  • Expanded visual effects and filters — halftone, cyberpunk, romantic/soft, and more

A lot of our updates over the past year have come straight from beta users and community feedback. If you want to try it out while it’s still in closed beta and help us shape this, let me know and I’ll send an invite code or join at storyboarder.com

u/atloo_ — 13 days ago
▲ 2 r/vtubertech+1 crossposts

I've been thinking about making or commissioning my own 2D VTuber model, but I don't want to shell out like £100 for a new phone.

EDIT: just so we're all on the same page here, I'm not looking for the best of the best. Just something decent that won't stutter harder than (my lawyer has advised me to not finish this joke)

u/TRIamOwen — 8 days ago

I want to make a model for as little money as possible, but all I have is a stupid Chromebook, so if anyone is willing to help, please notify me.

u/Severe-Fan-6254 — 14 days ago

Everything within the program appears fine. The virtual camera is turned on. I have uninstalled and reinstalled the program. I have restarted the computer. Neither Discord nor OBS recognizes the virtual camera that is supposed to be created by VSeeFace. I have used the virtual camera on Discord before, and it worked fine. Please tell me I'm stupid and brain dead and that there's an incredibly easy fix I'm missing.

u/FunkFabulous — 12 days ago