This is a transcript of The Resonance from December 1, 2024.
00:12: Waaah...
00:19: Hello...
00:20: Well, wait...
00:24: I've actually clicked it on the menu.
00:28: Hello JViden4.
00:33: Hello everyone.
00:36: I'm just making sure everything is running, got all the stream stuff.
00:52: Make sure my audio is all good, can you hear me fine?
00:56: Well, I was loud by 1.3 seconds.
00:59: Also thank you for the cheer Nitra, that was quick.
01:05: Okay, so...
01:06: Let me make sure...
01:12: Channel, there we go.
01:14: Everything should be going.
01:15: So hello everyone.
01:18: Oh boy, I'm getting feedback, there we go.
01:21: Hello everyone, I'm Frooxius, and welcome to the third episode of The Resonance.
01:26: Oh my god, thank you so much for the cheers Emergence and Temporal Shift.
01:30: Hello everyone.
01:32: So, this is the third episode of The Resonance.
01:36: It's essentially like a combination of office hours and podcasts.
01:40: Where you can ask anything about...
01:45: Anything about Resonite, whether it's its development, its philosophy, how things are going, its past, its future, where it's heading.
01:54: The goal is to have a combination of Q&A, so you can ask questions.
01:58: Whatever you'd like to know, I'll try to answer to the best of my ability.
02:03: And we also have...
02:05: I'm hearing that the microphone sounds windy, let me double-check that OBS didn't switch microphone on me again.
02:13: Test 1-2. It's in front of my face.
02:18: Properties. Yeah, it's using the right one.
02:22: Is it understandable?
02:25: It's really strange.
02:28: Like, it shouldn't be compressing.
02:31: It's a wireless microphone, but it's an uncompressed one.
02:35: But, anyway, let me know if the voice is OK if it's understandable.
02:42: Anyway, you're free to ask any questions, and I'm also going to do some general talking about the high-level concepts of Resonite, what its past is, what its future is going to be, which direction we want to head with it, and so on.
02:59: One thing, if you want to ask questions, make sure to put a question mark. Actually, let me double-check.
03:07: Oh, I didn't save the version with the auto-add.
03:11: Auto-pin. There we go. OK, now it should be working.
03:15: Make sure to end your question with a question mark; that way it pops up on my thing.
03:21: And we already have some popping up.
03:23: So, with that, we should be good to get started.
03:26: I might switch this camera, just like this. There we go.
03:30: So, hello everyone. Hello, I'm bad at names. Hello TroyBorg. Hello Lexavo.
03:37: So, we actually got the first question from Lexavo.
03:40: Do you think you'll have any other guests on your show that might focus on specific topics?
03:47: It's possible. Like, the goal of this is kind of like, you know, sort of like my kind of office hours.
03:57: So, it's probably going to be like the main focus, but I'm also kind of playing with the format a bit.
04:03: Unfortunately, Cyro couldn't make it. I usually have Cyro co-hosting so we can have a good back and forth
04:07: on the technical things and, sort of, you know, the different fields of Resonite.
04:14: But I kind of want to see where it goes, because I had some ideas,
04:19: and for the first two streams that I did, there were so many questions that we didn't really get much to the chill parts of it.
04:26: So we're going to kind of like explore that.
04:28: We might have like some other people as well, kind of talk to them about like specific stuffs of Resonite,
04:33: but I don't have any super specific plans just yet.
04:40: So we'll kind of see, at least for starters I kind of want to keep things simple, you know, take it like essentially baby steps.
04:49: But I feel like probably at some point I'll start bringing in more people as well,
04:54: so we can kind of talk about like how some stuff has been going and so on, but I'm still kind of figuring things out.
05:02: So the next question is from Emergence and Temporal Shift: what is the funnest particle?
05:12: And it kind of depends. Like, the thing that comes to mind is, you know, particles that make sounds,
05:20: but that's one of the things you cannot do right now.
05:24: You'd make, like, a particle that makes some kind of funny plop sound, or goes bling every time it, you know, bounces or something like that.
05:34: There's actually an interesting thing there, because it's a request we got in the past (as you can tell, I'm kind of going off on a tangent right now already),
05:42: where people want, you know, particles that make sound when they collide with something, or they generally want events so they can react.
05:51: The only thing is, with particles, the simulation runs locally on each person's end, so it's not, like, 100% in sync.
05:59: The people will be seeing similar things but not exactly the same.
06:03: So, like, you may have a clump of particles, and one might, you know, go this way for you, and for the other person it goes that way.
06:12: So if you have like one person handling the events, then something might happen you know, like say like for me the particle hits here and for the other person it hits here.
06:21: So if you do some event at this point, then it's going to be like a bit of a disconnect for the users, because for them the particle hit over here or maybe just missed.
06:32: And it's kind of an interesting problem, and one way to kind of approach that kind of thing is make sure your effects are all the things that are local.
06:39: So like for example local sound, that way everybody you know, say if I do like bubbles, and if you like you know bop them and they pop, you will hear the pop sound and it's going to be kind of localized mostly to you.
06:51: And if it's like similar enough, it's just going to be close enough that people will not notice.
06:57: But it's a curious problem, and kind of a bit of a tangent from the question.
07:06: So the next one we have, JViden4 is asking: I was confused by the stress test. The announcement said it was run on .NET 8 as opposed to .NET 9. Was it a typo or was it meant to establish a baseline?
07:18: So it was kind of neither. We have a .NET 9 headless ready. We kind of wanted to push it, like give people time to prepare.
07:29: But the team running the event decided to go with .NET 8 for the testing.
07:35: And the main test wasn't even the performance of the headless itself, it was to make sure that the hardware it's running on and the connection it's running on are able to handle the event.
07:47: This wasn't as much testing the performance of the headless itself, but more kind of a combination of hardware and make sure we are ready for the event with the setup we have.
07:57: And it was also one of the reasons because we wanted the event to be as flawless as possible.
08:03: The team decided to go with .NET 8 because .NET 9 technically isn't released yet, so we kind of stuck with that.
08:14: Even though the probability of it breaking things is very low, even if it was a 1% chance that something might break, we wanted to eliminate that 1%.
08:29: GlovinVR is asking, is there any focus on an actual tutorial in Resonite? Do new users suffer from not understanding the dashboard, how to find avatars, and how to find people?
08:37: This seems like an easy thing that can be delegated out and does not need to take up any of your bandwidth.
08:41: Yes, that's actually something that the content team is largely working on. They've been looking at ways to improve it with their experience and overhaul some parts of it.
08:50: Even the existing experience that we have has already been mostly delegated.
08:57: It's part of that, because getting new users to the platform, it crosses lots of different systems.
09:04: Because when you start up, we want to make sure that the user has an account, that their audio is working.
09:12: So we guide them through a few initial steps, and that's a part that's more on the coding side.
09:21: If there's problems with that part, or there's something we need to add to that part, then it needs to be handled by our engineering team.
09:27: Then the user gets brought into the in-world tutorial that explains things, and it's mostly handled by the content team.
09:34: We also had Ariel, who's our new person handling all the marketing stuff and development, communications, and so on.
09:44: She's been looking, because she's relatively new to Resonite as well, so she's been using that perspective to be like,
09:51: this is what we could do to smooth out the initial user experience, and she's been talking with the content team.
09:58: And we're looking at how do we improve that experience to reduce frustrations for new users.
10:07: And there's definitely a lot of things you could do.
10:09: The only part is it's always difficult. I won't say it's a simple thing to do,
10:17: because you always have to balance how much information do you give the user at once.
10:23: And in the past, we tried approaches where we told them about everything.
10:26: There's inventory, there's contacts, there's this thing, and this thing, and this thing.
10:31: And what we found ends up happening is a lot of users get overwhelmed, and they just shut down,
10:40: and they don't even understand the basic bits. They will not know how to grab things, how to switch locomotion.
10:46: So you kind of have to ease the users into it, build simple interactions, and building those kinds of tutorials takes a fair bit of time.
11:00: There's other aspects to this as well. For example, one of the things that people want to do when they come in here,
11:05: they want to set up their avatar. And the tricky thing with that is that it requires use of advanced tools,
11:13: like the developer tools and so on, it requires use of the avatar creator.
11:17: And the avatar creator is something we want to redo right, but that's an engineering task.
11:22: That's not something that the content team can do right now.
11:27: So there's a lot of aspects to this. They can build a better tutorial for some things, but some things do require some engineering work.
11:36: And we also have to design things in a way that avoids wasting too much effort on certain things,
11:43: because we know we're going to rework stuff like the UI, the inventory UI is going to be reworked at some point.
11:52: So then it becomes a question of how much time do we invest into the tutorial for the current one when we're going to replace it.
11:57: So some of those parts, we might just do a simple tutorial and do a better one later on once the engineering task comes through.
12:07: So there's just a lot of complexities to these kinds of things and it takes time to improve them.
12:16: What helps us the most is getting information about what the particular frustration points are for new users.
12:23: If somebody new comes to the platform, what do they get stuck on? What do they want to do? What's their motivation?
12:30: Because if we even know the user wants to set up their avatar, we can be like, okay, we're going to put things that direct them in the right direction.
12:39: But also with the avatar setup, there's always a combination of how do we make the tooling simpler so we don't need as much tutorial for it.
12:49: Because one of the things we did a few months back is introduce the Resonite packages.
12:54: And if there exists a Resonite package for the avatar that the user wants to use, they just drag and drop, it makes the whole process much simpler.
13:01: We don't have to explain how to use the developer tool, how to use the material tool, we usually just kind of drag and drop and you have a simple interface.
13:10: But that doesn't work in 100% of the cases, so it's a particularly challenging problem.
13:19: It's something we do talk about on the team, it's something that's important to us.
13:24: We want to ease in the onboarding of new users, make them as comfortable as possible.
13:29: And we're kind of working at it from different fronts, both from engineering and from the content side, as well as marketing and communications.
13:41: Let's see...
13:41: Jack the Fox author is asking,
13:43: My question for today is about ProtoFlux. In what direction do you want the language to evolve going forward? What future language features are you looking forward to?
13:52: So there's like a bunch...
14:06: There's actually...
14:08: One of the...
14:12: One of the things about...
14:16: The way I view visual scripting is that it has its drawbacks and has its benefits.
14:24: And one of the drawbacks is that when you write really complex behaviors, it gets a lot harder to manage.
14:32: Where typical text-based programming language might be simpler.
14:37: But one of its benefits is that you literally...
14:39: It's very hands-on.
14:41: You literally drag wires. If I want to control these lights, I just pull things from this and I drag wires.
14:47: And it has a very hands-on feeling. It's very spatial.
14:53: And the way I imagine the optimal way for this to work is to actually be combined with a more typical text-based programming language.
15:06: Where if you have a lot of heavy logic, a lot of complex behaviors...
15:16: It's much simpler to code things that way.
15:22: But then if you want to wire those complex behaviors into the world, that's where visual scripting can come in handy.
15:29: And I think we'll get the most strength by combining both.
15:34: And the way I wanted to approach the typical text-based programming is by integration of WebAssembly.
15:41: Which will essentially allow you to use lots of different languages, even languages like C and C++.
15:50: With those you can bring support for other languages like Lua, Python, lots of other languages.
15:57: You write a little bit of more complex code, and then some of that code might be exposed as a node.
16:01: And that node you kind of wire into other things, you do maybe a little extra operations.
16:05: It's almost like, if you're familiar with electronics, it's almost like having an integrated circuit.
16:12: And the integrated circuit, it has a lot of the complex logic.
16:16: And it could be written in a typical language, compiled into a WebAssembly module.
16:23: And then the integrated circuit is going to have a bunch of extra things around it that are wired into inputs and outputs.
16:29: And make it easier to interface with things.
16:36: So to me that's the most optimal state, where we have both.
16:40: And we can combine them in a way where you get the strengths of each, and weaknesses of neither essentially.
16:49: That said, there are definitely things we can do to improve ProtoFlux.
16:53: The two big things I'm particularly looking forward to are nested nodes.
17:00: Those will let you create essentially package-like functions.
17:03: You'll be able to define...
17:06: If I... I kinda wanna draw this one in, so...
17:10: I should probably have done this at the start, but...
17:17: I kinda forgot...
17:22: Let's see...
17:23: If I move to the... If I end up moving... This is probably gonna be too noisy visually.
17:30: I gotta pick it up.
17:32: And let's try this. I'm gonna move this over here.
17:41: So...
17:43: Make sure I'm not colliding with anything.
17:46: So the idea is you essentially define a node with your set of inputs.
17:56: And this is my thinking for the interface.
18:00: So this would be your inputs.
18:03: So for example you can have value inputs, you can have some impulse inputs.
18:07: And you have some outputs. It can be values as well as impulses.
18:14: And then inside of the node you can do whatever you want.
18:20: Maybe this goes here, maybe this goes here, this goes here.
18:24: And this goes here, and this goes here, and here.
18:27: And then this goes here.
18:29: Or maybe this goes here, and this goes here.
18:33: And once you define this, you essentially, this becomes its own node that you can then reuse.
18:41: So you get like a node that has the same interface that you defined over there.
18:49: And this is sort of like the internals of that node.
18:52: And then you can have instances of that node that you can use in lots of different places.
18:58: With this kind of mechanism, you'll be able to package a lot of common functionality into your own custom nodes and just reuse them in a lot of places without having to copy all of this multiple times.
19:14: Which is going to help with performance for ProtoFlux, because the system will not need to compile essentially the same code multiple times.
19:22: But it'll also help with the community, because you'll be able to build libraries of ProtoFlux nodes and just kind of distribute those and let people use a lot of your custom nodes.
19:34: So I think that's going to be a particularly big feature for ProtoFlux in that sense.
19:41: It's something that's already supported internally by the ProtoFlux VM, but it's not integrated with FrooxEngine yet.
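To make the nested-node idea concrete, here is a rough sketch in plain C# (purely illustrative, not the actual ProtoFlux or FrooxEngine API; the names NodeTemplate and NodeInstance are made up): the template's body is compiled once, and every instance only holds a reference to it, which is what saves both compilation work and memory.

```csharp
// Purely illustrative sketch (not the actual ProtoFlux/FrooxEngine API):
// the template is compiled once, and every instance just references it.
using System;
using System.Collections.Generic;

// A "nested node" template: a named interface plus a compiled body.
class NodeTemplate
{
    public string Name;
    public Func<float[], float[]> CompiledBody;  // compiled once, shared
}

// An instance only stores a reference to the template plus its own wiring.
class NodeInstance
{
    public NodeTemplate Template;
    public float[] Evaluate(float[] inputs) => Template.CompiledBody(inputs);
}

class NestedNodeDemo
{
    static void Main()
    {
        // One shared template...
        var bounce = new NodeTemplate
        {
            Name = "BounceCurve",
            CompiledBody = inputs => new[] { MathF.Abs(MathF.Sin(inputs[0])) }
        };

        // ...used by many instances, without recompiling the body each time.
        var instances = new List<NodeInstance>();
        for (int i = 0; i < 1000; i++)
            instances.Add(new NodeInstance { Template = bounce });

        Console.WriteLine(instances[42].Evaluate(new[] { 1.5f })[0]);
    }
}
```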
19:51: There's another aspect to this as well, because once we have support for custom nodes, we can do lots of cool things where this essentially becomes like a function, like an interface.
20:04: So you can have systems like, for example, the particle system that I'm actually working on.
20:11: And say you want to write a module for the system; the particle system could have bindings that it accepts.
20:22: It essentially accepts any node that, for example, has three inputs.
20:29: Say, for example, position, lifetime, that's how long the particle has existed, and say direction.
20:44: And then you have output, and the output is a new position.
20:50: And then inside you can essentially do whatever math you want.
20:56: And if your node, if your custom node follows this specific interface, like it has these specific inputs, this specific output, it becomes a thing.
21:05: You can just drop in as a module into the particle system to drive the particle's position, for example, or its color, or other properties.
21:15: And you'll be able to package behaviors and drop them in as ProtoFlux functions, and have essentially a way to visually, using the visual scripting, define completely new modules for the particle system.
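As a hedged illustration of that "interface contract" idea, the sketch below uses a plain C# delegate; the name ParticlePositionModule and the whole shape of the API are hypothetical, since the real binding would be defined on ProtoFlux nodes rather than C# delegates.

```csharp
// Hypothetical sketch of the kind of interface contract described above,
// not actual Resonite API. Any node with these inputs/outputs could be
// plugged into the particle system as a position module.
using System;
using System.Numerics;

// The contract: position + lifetime + direction in, new position out.
delegate Vector3 ParticlePositionModule(Vector3 position, float lifetime, Vector3 direction);

class ParticleModuleDemo
{
    // A user-defined "node body": drift along the direction and add a wobble.
    static Vector3 WobbleModule(Vector3 position, float lifetime, Vector3 direction)
    {
        var drift = direction * lifetime;
        var wobble = new Vector3(MathF.Sin(lifetime * 5f), 0f, MathF.Cos(lifetime * 5f)) * 0.1f;
        return position + drift + wobble;
    }

    static void Main()
    {
        ParticlePositionModule module = WobbleModule;  // "drop it in" as a module
        var newPos = module(Vector3.Zero, 0.5f, Vector3.UnitY);
        Console.WriteLine(newPos);
    }
}
```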
21:32: But it expands beyond that. You'll be able to do procedural textures.
21:36: Like one node that you might be able to do is one with an interface where you literally have two inputs. Or maybe just one input even.
21:45: Say the UV, that's the UV coordinate and texture, and then a color.
21:53: And then inside you do whatever, and on the output you have a color.
21:59: And if it follows this kind of interface, what it essentially does is you get a texture that's like a square.
22:09: For each pixel, your node gets the UV coordinate and it turns it into a color.
22:15: So if you want to make a procedural texture where each pixel can be computed completely independent of all others, all you need to do is define this.
22:25: Make sure you have UV input, you have color output, and this whole thing can become your own custom procedural texture.
22:33: Where you just decide, based on the coordinate you're in, you're going to do whatever you want to compute pixel color and it's just going to compute it for you.
22:42: And with this, it will also fit in a way that this can be done in a multi-threaded manner.
22:49: Because each pixel is independent, so as the code is generating the texture, it can call this node in parallel.
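A minimal sketch of that per-pixel model, assuming nothing about Resonite's actual implementation: the "node" is just a function from UV to color, and because each pixel is independent it can be evaluated in parallel.

```csharp
// Illustrative sketch only: a per-pixel "UV in, color out" function evaluated
// in parallel, the way the procedural-texture nodes described above could be.
using System;
using System.Threading.Tasks;

class ProceduralTextureDemo
{
    // The user-defined "node": UV coordinate in, color out (RGBA, 0..1).
    static (float r, float g, float b, float a) Pixel(float u, float v)
    {
        // Simple checkerboard as an example of per-pixel independent logic.
        bool check = ((int)(u * 8) + (int)(v * 8)) % 2 == 0;
        return check ? (1f, 1f, 1f, 1f) : (0f, 0f, 0f, 1f);
    }

    static void Main()
    {
        const int size = 256;
        var texture = new (float r, float g, float b, float a)[size, size];

        // Each pixel is independent, so the rows can be filled in parallel.
        Parallel.For(0, size, y =>
        {
            for (int x = 0; x < size; x++)
                texture[x, y] = Pixel((x + 0.5f) / size, (y + 0.5f) / size);
        });

        Console.WriteLine(texture[10, 10]);
    }
}
```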
22:58: There's going to be more complicated ones. You'll be able to do your own custom procedural meshes, for example.
23:09: That one is probably going to be a little bit more complicated, because you'll have to build the geometry.
23:15: But essentially, the way that one might work is you get an impulse, and then you do whatever logic you want to build a mesh, and now you have your procedural mesh component.
23:26: And you can just use it like any other procedural component.
23:30: I think once this goes in, this is going to be a particularly powerful mechanism.
23:36: A lot of systems that don't have much to do with ProtoFlux right now, they will strongly benefit from it.
23:44: So this to me is going to be a really big feature of ProtoFlux.
23:51: The other one that I am particularly looking forward to, especially implementing it and playing with it, is the DSP mechanism.
23:59: And what that will let you do is make sort of workflows with the nodes to do stuff like processing audio, processing textures, and processing meshes.
24:12: With those, you'll be able to do stuff like build your own audio studio, or music studio.
24:20: Where you can do filters on audio, you can have signal generators, and you could pretty much use Resonite to produce music or produce sound effects.
24:31: Or you could use it to make interactive audio-visual experience.
24:37: Where there's a lot of real-time processing through audio, and you can feed it what's happening in the world, and change those effects.
24:45: And that in itself will open up a lot of new workflows and options that are not available right now.
24:55: They're a little bit there, but not enough for people to really even realize it.
25:02: So the DSP is a big one. Same with the texture one, you'll be able to do procedural textures, which on itself is also really fun to play with.
25:12: But also you can now, once we have those, you'll be able to use Resonite as a production tool.
25:18: Even if you're building a game in Unity or Unreal, you could use Resonite as part of your workflow to produce some of the materials for that game.
25:26: And it gets a lot of the benefits of having it be a social sandbox platform.
25:32: Because, say you're working on a sound effect, or you're working on music, or working on procedural texture, you can invite people in and you can collaborate in real-time.
25:43: That's given thanks to Resonite's architecture, it's just automatic.
25:48: If you have your favorite setup for studio, for working on something, you can just save it into your inventory, send it to somebody, or just load it, or you can publish it and let other people play with your studio setup.
26:02: The DSP part is also going to be a big doorway to lots of new workflows and lots of new ways to use Resonite.
26:15: I'm really excited for that part, and also part of it is I just love audio-visual stuff.
26:21: You wire a few nodes, and now you have some cool visuals coming out of it, or some cool audio, and you can mess with it.
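As a rough example of the kind of DSP chain being described (a generator node feeding a filter node), here is a tiny C# sketch; it is generic audio code, not any existing Resonite node.

```csharp
// Rough sketch of a generator -> filter DSP chain in plain C#,
// not any existing Resonite node or API.
using System;

class AudioDspDemo
{
    const int SampleRate = 48000;

    // Signal generator node: a sine oscillator.
    static float Sine(float frequency, int sampleIndex) =>
        MathF.Sin(2f * MathF.PI * frequency * sampleIndex / SampleRate);

    static void Main()
    {
        // Filter node: a one-pole low-pass, applied sample by sample.
        float cutoffCoeff = 0.05f;   // smaller = darker sound
        float state = 0f;

        var buffer = new float[SampleRate]; // one second of audio
        for (int i = 0; i < buffer.Length; i++)
        {
            float raw = Sine(440f, i);                // generator
            state += cutoffCoeff * (raw - state);     // low-pass filter
            buffer[i] = state;
        }

        Console.WriteLine($"Filtered {buffer.Length} samples, last = {buffer[^1]:F4}");
    }
}
```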
26:28: There's another part for the mesh processing.
26:35: You could, for example, have a node where you input a mesh, and on the output you get a subdivided, smoothed-out mesh.
26:48: Or maybe it voxelizes it, maybe it triangulates it, maybe it applies a boolean filter, or maybe there's some perturbation to the surface.
26:57: And that feature I think will combine with yet another feature that's on the roadmap, which is vertex-based mesh editing.
27:07: Because you'd essentially be able to do a thing where, say you have a simple mesh, and this is what you're editing.
27:20: And then this mesh, this live mesh, I'm actually going to delete this one in the background because they're a bit bad for the contrast.
27:34: So I'm taking this a little bit for this question, but this is one I'm particularly excited for, so I want to go a little bit in-depth on this.
27:45: Okay, that should be better.
27:47: So you're editing this mesh, and then you have your own node setup that's doing whatever processing, and it's making a more complex shape out of it because it's applying a bunch of stuff.
27:58: And you edit one of the vertices and it just runs through the pipeline, your mesh DSP processing pipeline, and computes a new output mesh based on this as an input.
28:09: So you move this vertex, and this one maybe does this kind of thing.
28:15: You do this kind of modeling... if you use Blender, this is what you do with modifiers, where you can have simple base geometry with a subdivision surface, and then you're moving vertices around, and it's updating the more complex mesh by processing it with the modifiers.
28:33: The mesh DSP combined with the vertex editing will allow for a very similar workflow, but one that I feel is even more powerful and flexible, and will probably also perform better because our processing pipeline is very asynchronous.
28:50: Because when I mess with Blender, one of the things that kind of bugs me is if you use modifiers, it takes a lot of processing, the whole interface essentially lags.
29:02: The way stuff works in Resonite, you will not lag as a whole; only the thing that's updating will take time. Say this takes a second to update and I move the vertex, I'll see the result in a second, but I will not lag entirely for a second.
29:17: So that itself I think will combine really well with lots of upcoming features and all sorts of existing features.
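A minimal sketch of that asynchronous idea, with made-up names and none of Resonite's actual update machinery: the expensive rebuild runs on a background task and the finished result is swapped in when ready, while the main loop keeps running at full rate.

```csharp
// Toy model of "don't block the whole app while a heavy rebuild runs":
// the rebuild happens on a background task and the result is swapped in.
using System;
using System.Threading;
using System.Threading.Tasks;

class AsyncMeshDemo
{
    static volatile string currentMesh = "low-poly mesh";

    static void Main()
    {
        // A vertex was edited: kick off the heavy rebuild in the background.
        Task.Run(() =>
        {
            Thread.Sleep(1000);               // stand-in for a slow modifier stack
            currentMesh = "subdivided mesh";  // swap in the finished result
        });

        // Meanwhile the "main loop" keeps running at full rate.
        for (int frame = 0; frame < 90; frame++)
        {
            Console.WriteLine($"frame {frame}: rendering {currentMesh}");
            Thread.Sleep(16);                 // ~60 fps frame time
        }
    }
}
```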
29:25: And for me that's just a big part, even just beyond ProtoFlux, it's how I like to design things.
29:36: This way where each system is very general, it does its own thing, but also it has lots of ways to interact with lots of other systems.
29:58: So this should cover it. I'm going to hop back here.
30:05: I went also deep on this particular question, but hopefully that kind of sheds some idea on some of the future things and things I want to do, not just in ProtoFlux, but with other things.
30:26: There we go. Sorry, I'm just settling back in.
30:33: I hope that answers the question in good detail.
30:41: So next question we have...
30:47: TroyBorg is asking, you said you had a side project you wanted to work on when you get done with the particle system, before starting the audio system rework. Are you able to talk about it?
30:57: Yes, so the thing I was kind of thinking about is...
31:05: Essentially I've been playing a lot with Gaussian splatting recently, and I can actually show you some of the videos.
31:15: Let me bring some of my Splats.
31:19: The only way I can actually show you these is through a video.
31:23: This is one I did very recently, so this is probably one of the best scans.
31:32: You can see if I play this, I almost ask you if this is loaded for you, but it only needs to be loaded for me.
31:40: If you look at this, this is a scan of a fursuit head of a friend who is here in the Czech Republic.
31:47: His name is Amju.
31:48: He let me scan his fursuit head.
31:51: I first reconstructed it with a traditional technique, but then I started playing with Gaussian Splat Software.
31:56: I threw the same dataset at it, and the result is incredible.
32:00: If you look at the details of the fur, the technique is capable of capturing the softness of it.
32:09: It just looks surreal.
32:15: That's the easiest way to describe it.
32:19: It gives you an incredible amount of detail, while still being able to render this at interactive frame rates.
32:28: I've been 3D scanning for years.
32:31: I love 3D scanning stuff and making models of things.
32:35: And this technique offers a way to reconstruct things.
32:41: I can actually show you how the result of this looks with traditional photogrammetry.
32:54: So if I bring this, you see this is traditional mesh.
32:58: And it's a perfectly good result. I was really happy with this.
33:03: But there's no softness in the hair. There's artifacts around the fur.
33:10: It gets kind of blob-ish. It loses the softness that the Gaussian splats are able to preserve.
33:18: This is another kind of example.
33:20: I took these photos for this in 2016. That's like 8 years ago now.
33:29: And also, if you just look at the part, it just looks real.
33:35: I'm really impressed with the technique. I've been having a lot of fun with it.
33:41: And on my off-time, I've been looking for ways...
33:53: I'm kind of looking at how it works.
33:56: And the way the Gaussian splats work, it's relatively simple in principle.
34:01: It's like an extension of point cloud, but instead of just tiny points,
34:05: each of the points can be a colorful blob that has fuzzy edges to it,
34:11: and they can have different sizes.
34:12: You can actually see some of the individual splats.
34:16: Some of them are long and stretched. Some of them can be round.
34:19: Some are small, some are big.
34:22: And they can also change color based on which direction you're looking at them from.
34:28: So I've been essentially looking for a way,
34:30: can we integrate the Gaussian splatting rendering into Resonite?
34:35: I'm fairly confident I'll be able to do it at this point.
34:39: I understand it well enough to make an implementation.
34:43: The only problem is I don't really have time to actually commit to it right now
34:46: because I've been focusing on finishing the particle system.
34:49: But the thing I wanted to do is, after I'm done with the particle system,
34:55: mix in a smaller project that's more personal and fun,
35:00: just like a mental health break, pretty much.
35:03: It's something to do this primarily for myself,
35:09: because I want to bring those scans in and showcase them to people.
35:15: I'm still not 100% decided, I'll see how things go,
35:18: but I'm itching to do this and doing a little bit of research
35:23: on the weekends and so on to get this going.
35:28: It's something I like to do.
35:33: Also something that a lot of people would appreciate as well,
35:35: because I know there's other people in the community
35:38: who were playing with Splats and they wanted to bring them in.
35:41: I think it would also make Resonite interesting to a lot of people
35:46: who might not even think about it now,
35:47: because it's essentially going to give you a benefit
35:50: to visualize the Gaussian splats in a collaborative sandbox environment.
35:55: It might even open up some new doors.
35:58: I'm not 100% decided, but pretty much this is what I've been thinking about.
36:08: Next, Noel64 is asking,
36:09: Are there plans to add Instant Cut options for cameras?
36:13: The current flight from one place to another when seeking looks a bit weird over long distances.
36:17: You can already do this, so there's an option.
36:21: I just have it at default, which does have the fly,
36:29: but there's literally a checkbox in my UI,
36:32: Interpolate Between Anchors.
36:33: If I uncheck that and I click on another,
36:36: like, you know, this is instant,
36:37: I will click over there if I can re-
36:39: No, there's a collider in the way.
36:43: I'm just going to do this.
36:44: I click on it, you know, I'm instantly here.
36:47: So that feature already exists.
36:49: If it kind of helps, I can just, you know, keep this one off
36:52: so it doesn't do the weird fly-through.
36:57: But yes, I hope that answers the question.
37:02: Next, Wicker Dice.
37:04: What would you like Resonite to be in five years?
37:06: Is there a specific goal or vision?
37:08: So for me, like, the general idea of Resonite is
37:16: it's kind of hard to put it in words sometimes
37:18: because in a way that would be a good way to communicate
37:22: but it's almost like, it's like a layer.
37:29: It's like a layer where you have certain guarantees.
37:33: You're guaranteed that everything is real-time synced.
37:36: Everything is real-time collaborative.
37:38: Everything is real-time editable.
37:43: You have integrations with different hardware.
37:45: You have persistence.
37:47: You can save anything, whether it's locally or through cloud,
37:51: but everything can be persisted.
37:55: And what I really want Resonite to be is
37:58: be this layer for lots of different workflows
38:03: and for lots of different applications.
38:06: The earlier one is social VR, where you hang out with friends,
38:10: you're watching videos together,
38:13: you're playing games together,
38:16: you're just chatting or doing whatever you want to do.
38:21: But if you think about it, all of it is possible
38:23: thanks to this baseline layer.
38:26: But there's also other things you can do
38:27: which also benefit from that social effect.
38:30: And it kind of ties into what I've been talking about earlier,
38:34: which has to do with using Resonite as a work tool,
38:37: as part of your pipeline.
38:39: Because if you want to be working on music,
38:43: if you want to be making art,
38:45: if you want to be doing some designing and planning,
38:49: you still benefit from all these aspects of the software,
38:53: being able to collaborate in real time.
38:56: If I'm working on something and showing something,
38:59: you immediately see the results of it.
39:03: You can modify it and can build your own applications on it.
39:07: People, given Resonite's nature, can build their own tools.
39:12: And then share those tools with other people as well, if you want to.
39:17: So for me, what I really want Resonite to be
39:20: is a foundation for lots of different applications
39:23: that goes beyond just social VR,
39:27: but which enriches
39:29: pretty much whatever task you want to imagine
39:34: with that social VR,
39:37: with the real-time collaboration,
39:39: and persistence, and networking, and that kind of aspect.
39:42: Think of it as something like Unity,
39:45: or Unreal, because those engines,
39:47: or Godot, I shouldn't forget that one,
39:51: these engines,
39:54: they're maybe primarily designed for building games,
39:56: but people do lots of different stuff with them.
39:59: They build scientific visualization applications,
40:03: medical training applications,
40:05: some people build actual just utilities with them.
40:11: They're very general tools,
40:13: which solve some problems for you,
40:16: so you don't have to worry about
40:17: low-level graphics programming in a lot of cases,
40:19: you don't have to worry about having
40:21: a basic kind of functional engine.
40:24: You kind of get those for free, in quotes.
40:29: In a sense, you don't need to spend time on them,
40:31: that's already provided for you.
40:33: And you can focus more on what your actual application is,
40:36: whether it's a game, whether it's a tool,
40:38: whether it's a research application,
40:41: whatever you want to build.
40:43: I want Resonite to do the same, but go a level further,
40:46: where instead of just providing the engine,
40:49: you get all the things I mentioned earlier.
40:51: You get real-time collaboration.
40:53: Whatever you build, it supports real-time collaboration.
40:57: It supports persistence, you can save it.
41:00: You already have integrations with lots of different hardware,
41:03: interactions like grabbing things, that's just given.
41:07: You don't have to worry about that,
41:09: you can build your applications around that.
41:13: I want Resonite to be almost the next level,
41:21: beyond game engines.
41:25: Another kind of analogy I use for this is,
41:28: if you look at early computing,
41:32: when computers were big and room-scaled,
41:36: the way they had to be programmed is with punch cards, for example.
41:39: I don't know if that was the very first method,
41:41: but it's one of the earliest.
41:43: And it's very difficult because you have to write your program,
41:47: and you have to translate it in the individual numbers on the punch card,
41:51: and then later on there came assembly programming languages.
41:55: And those made it easier,
41:57: they let you do more in less time,
42:01: but it was still like, you have to think about managing your memory,
42:04: managing your stack.
42:06: You need to decompose complex tasks into these primitive instructions,
42:11: and it still takes a lot of mental effort.
42:14: And then later on came higher-level programming languages.
42:18: I'm kind of skipping a lot, but say C, C++, C#,
42:24: and languages like Python.
42:26: And they added further abstractions where, for example,
42:29: even with just C and C++,
42:33: you don't have to worry about memory management as much,
42:35: at least not managing your stack manually.
42:38: And now some of the things you used to have to worry about,
42:42: they're automatically managed.
42:44: You don't even have to think about them.
42:46: You can just say: my function accepts these values,
42:49: outputs this value, and it generates the appropriate stack management code for you.
42:57: And then came tools built with those languages,
43:00: like I mentioned, Unity or Unreal,
43:02: where you don't have to worry about, or Godot,
43:05: where you don't have to worry about having the game engine,
43:09: being able to render stuff on screen.
43:10: That's already provided for you.
43:13: And with Resonite,
43:15: the goal is to essentially move even further along this kind of progression
43:19: to make it where you don't have to worry about the networking aspect,
43:23: the persistence aspect, integrations with hardware,
43:26: you're just given that,
43:28: and you can focus more of your time
43:30: on what you actually want to build in that kind of environment.
43:34: So that's pretty much,
43:36: that's the big vision I have on my end
43:39: for what I want Resonite to be.
43:43: I think Easton is asking,
43:46: what are your thoughts on putting arrows on generic type wires?
43:54: I'm not actually sure if I fully understand that one.
43:58: I don't know what you mean, generic type wires.
44:01: Do you mean wires that are of the type type?
44:06: I probably need a clarification for this one.
44:11: Sorry, I can't...
44:13: I don't know how to interpret the particular question, so I'll...
44:17: Oh, he's asking arrows and wires.
44:24: I think the impulse ones actually have arrows.
44:30: I'm not really sure, I probably need to see an image or something.
44:35: Next, zitjustzit is asking,
44:37: So, like, boxes of code that take inputs and give outputs,
44:39: allowing for a coding interface with Flux without having to build some parts of the function using the nodes?
44:44: Yes.
44:47: Your ProtoFlux node becomes a function that other systems can call
44:53: without even needing to know it's ProtoFlux.
44:57: They're just like, I'm going to give you these values and I expect this value as the output,
45:02: and if your node matches that pattern, then you can give it to those other systems.
45:10: Next question, TroyBorg: that does sound amazing,
45:14: is that something for after Sauce, the custom ProtoFlux nodes?
45:18: So, it's not related to Sauce, that's fully FrooxEngine-side,
45:22: so technically it doesn't matter whether it happens before or Sauce,
45:28: it's not dependent on it in any way.
45:32: There is a part that is, which is having custom shader support,
45:35: which we do want to do with ProtoFlux, but that one does require the switch to Sauce,
45:41: because with Unity, the options to do custom shaders are very limited,
45:47: and very kind of hacky.
45:50: So, that one will probably wait, but for the parts I was talking about earlier,
45:56: those will happen regardless of when Sauce comes in.
45:59: It might happen after Sauce comes in, it might happen before it comes in,
46:03: but this is just purely how the timing ends up working out,
46:07: and how the prioritization ends up working out.
46:10: Next question, ShadowX, I'm just checking time.
46:14: ShadowX, with nested nodes, will custom nodes be able to auto-update
46:17: when the source template for said node is changed?
46:20: Yes, there are multiple ways to interpret this as well,
46:24: but if you have a template, and you have it used in lots of places,
46:28: if you change the internals of the node, every single instance is going to be reflected.
46:33: So you can actually have it used in lots of objects in the scene,
46:40: and you need to change something about its internals,
46:42: everything is going to be reflected in the scene.
46:45: The other interpretation is if you make a library of nodes,
46:51: and say you reference that in your world,
46:54: and the author of the library publishes an updated version,
46:57: is that going to auto-update other worlds, which do use that library?
47:04: That would be handled by the Molecule system,
47:08: which is our planned system for versioning,
47:12: and we want to use it not just for Resonite itself,
47:15: but also for ProtoFlux, so you can publish your library functions and so on.
47:20: And with that, what we do is let you define rules on when to auto-update and what not.
47:27: We probably follow something like semantic versioning,
47:30: so if it's a minor update, it auto-updates unless you disable that as well.
47:36: If it's a major update, it's not going to auto-update unless you specifically ask it to.
47:42: So that's going to be the other part of it.
47:44: That one's definitely going to give you more of a choice.
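Since Molecule doesn't exist yet, the following is only a guess at the decision logic described above (minor updates apply automatically, major updates require opt-in), written as a small C# helper with hypothetical names.

```csharp
// Hedged sketch of the semantic-versioning rule described above; the Molecule
// system isn't built yet, so this only models the decision, not its API.
using System;

class VersioningDemo
{
    static bool ShouldAutoUpdate(Version installed, Version published, bool allowMajor)
    {
        if (published <= installed) return false;
        bool isMajorBump = published.Major > installed.Major;
        // Minor/patch bumps auto-update by default; major bumps need opt-in.
        return !isMajorBump || allowMajor;
    }

    static void Main()
    {
        Console.WriteLine(ShouldAutoUpdate(new Version(1, 2), new Version(1, 3), false)); // True
        Console.WriteLine(ShouldAutoUpdate(new Version(1, 2), new Version(2, 0), false)); // False
        Console.WriteLine(ShouldAutoUpdate(new Version(1, 2), new Version(2, 0), true));  // True
    }
}
```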
47:51: Next question, TroyBorg.
47:53: So could I have something like a bubble node that has all the code for floating around randomly,
47:58: and the random lifetime, on it?
48:01: I'm not really sure what you mean by bubble node,
48:03: but pretty much you can package all the code for whatever you want the bubble to do in that node,
48:10: and it doesn't need to be duplicated. For example, my bubbles.
48:16: Oh, that didn't work.
48:20: There we go, like this one.
48:21: You see, I have this bubble, and this bubble, it has code on it that handles it flying around,
48:32: and right now, when I make the bubble, it actually duplicates all the code for the bubble,
48:36: which means the ProtoFlux VM needs to compile all this code.
48:40: It's relatively simple, so it's not as much, but still, it adds up,
48:44: especially if you had hundreds of these, or thousands.
48:49: With the nested nodes, all this bubble will need to do is reference that template,
48:54: and only need one of it, which means it's just going to reuse the same compiled instance of that particular node
49:01: instead of duplicating literally the entire thing on each of the objects you make independently.
49:11: Next, a question from TroyBorg.
49:16: For processing textures, you could do stuff like levels and curves adjustments,
49:21: like in the Blender Classroom: the texture for albedo, they adjust it with levels
49:25: and grayscale, then plug it into a heightmap instead of a separate texture.
49:32: I cannot fully understand how to map this question to Resonite,
49:36: because I'm not familiar with Blender Classroom,
49:40: but you could define your own procedural texture and then use it in other stuff.
49:46: The procedural texture, it will end up as actual bitmap.
49:49: It's going to run your code to generate the texture data,
49:53: upload it to the GPU, and at that point it's just the normal texture.
50:00: But you're able to do stuff like that, or at least look similar.
50:03: Next question, Dusty Sprinkles. I'm also just checking how many of these.
50:07: Fair bit of questions. Dusty Sprinkles is asking,
50:13: When we get custom nodes, do you think we'll be able to create our own default node UI?
50:18: I could see using the audio DSP for custom nodes to make DAWs.
50:23: So the custom UI for the ProtoFlux nodes that's completely independent from custom nodes.
50:31: It's something we could also offer, but it's pretty much a completely separate feature.
50:37: Because the generation of the node UI is technically outside of the main ProtoFlux itself.
50:43: It's the UI to interface with the ProtoFlux.
50:47: We could add mechanisms to be able to do custom node UIs.
50:51: There are some parts of that that I'm a little bit careful with,
50:55: because usually you can have hundreds or thousands of nodes,
50:59: and having customizable systems can end up like a performance concern.
51:05: Because the customization, depending on how it's done,
51:09: it can add a certain amount of overhead.
51:12: But it's not something we're completely closed to,
51:15: it's just like we're probably going to approach it more carefully.
51:19: It's not going to come as part of custom nodes, though.
51:22: Those are independent.
51:27: BlueCyro, oh, Cyro unfortunately is late,
51:30: but I don't know if I can bring him out easily,
51:34: because I already have this set up without him, I'm sorry.
51:41: JViden4 is asking: internally, how prepared is the engine
51:45: to take full advantage of the modern .NET JIT?
51:48: There have been lots of things since Framework, like spans
51:50: to avoid allocs, and unsafe methods, think bit-casting,
51:53: that can make things way, way, way faster.
51:56: Are there areas where we use the new features in the headless client
51:59: through preprocessor directives or something?
52:02: We do use a number of features that are backported to older versions,
52:08: like you mentioned spans, using stack allocations and so on,
52:13: and we expect those to get a performance uplift
52:17: with the modern runtime, because those are specifically optimized for it.
52:21: So there's parts of the engine, especially anything newer,
52:25: like we've been trying to use the more modern mechanisms where possible.
52:31: There are bits where we cannot really use the mechanisms,
52:34: or if we did, it would actually be detrimental right now.
52:38: There's certain things like, for example, the vectors library.
52:45: That one, with modern .NET, it runs way faster,
52:48: but if we used it right now, we would actually run way slower,
52:52: because with Unity's Mono, it just doesn't run well.
52:59: There's certain things which, if we did right now,
53:04: would essentially end up hurting performance until we make the switch,
53:08: but we tend to use a different approach that's not as optimal for the more modern .NET,
53:12: but is more optimal now.
53:15: So there may be some places, like in code, you can see that,
53:18: but where possible, we try to use the modern mechanisms.
53:21: There's also some things which we cannot use just because they don't exist
53:25: with the version of .NET, and there's no way to backport them.
53:27: For example, using SIMD, the intrinsics, to accelerate a lot of the math.
53:34: That's just not supported under the older versions, and there's no way to backport it,
53:38: so we cannot really use those mechanisms.
53:43: Once we do make the switch, we expect a pretty substantial performance improvement,
53:49: but part of why we want to do the switch, especially as one of the first tasks towards performance,
53:56: is because it'll let us use all the mechanisms going forward.
54:02: If we build new systems, we can take full advantage of all the mechanisms that modern .NET offers,
54:09: and we can, over time, also upgrade the old ones to get even more performance out of it.
54:16: Overall, I expect big performance gains, even for the parts that are not the most prepared for it,
54:21: just because the code generation quality is way higher,
54:32: but we can essentially take the time to get even more performance by switching some of the approaches
54:42: to the more performant ones.
54:45: Making the switch in itself is going to be a big performance boost, but that's not the end of it.
54:50: It also opens doors for doing even more, like following that.
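For reference, this is the kind of allocation-free pattern being talked about (Span<T> plus stackalloc); it compiles on older runtimes too, but the modern .NET JIT is much better at optimizing it. This is a generic example, not code from FrooxEngine.

```csharp
// Generic example of Span<T> + stackalloc: a scratch buffer on the stack,
// no garbage-collected allocation at all.
using System;

class SpanDemo
{
    static float Sum(ReadOnlySpan<float> values)
    {
        float total = 0f;
        foreach (var v in values) total += v;
        return total;
    }

    static void Main()
    {
        Span<float> scratch = stackalloc float[8];
        for (int i = 0; i < scratch.Length; i++) scratch[i] = i * 0.5f;

        Console.WriteLine(Sum(scratch)); // 14
    }
}
```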
55:05: The splats, you kind of have to implement the support for them yourself.
55:09: The output isn't a mesh.
55:11: The idea of Gaussian splatting is you're essentially using a different type of primitive to represent your...
55:20: I don't want to say mesh, because it's not a mesh.
55:24: It essentially represents your model, your scene or whatever you're going to show.
55:30: Instead of the vertices and triangles that we have in traditional geometry,
55:34: it's completely composed from Gaussians, which are a different type of primitive.
55:41: You can actually use meshes to render those.
55:45: From what I've looked, there's multiple ways to do the rendering.
55:51: One of the approaches for it is you essentially include the splat data,
56:01: and then you just render this. You use the typical GPU rasterization pipeline to render them as quads,
56:08: and then the shader does the sampling of spherical harmonics so it can change color based on the angle you look at it from and other stuff.
56:16: There's other approaches that implement the actual rasterization in compute shader,
56:23: and this can lead to more efficient ones, and at that point you're not using traditional geometry,
56:29: but the approach is kind of varied, there's lots of ways to implement them.
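For context, the math used by published Gaussian splatting renderers (not anything Resonite-specific) looks roughly like this: each splat is drawn as a screen-space quad, the fragment's opacity follows a 2D Gaussian falloff around the projected center, and the splats are alpha-blended front to back.

```latex
% Standard Gaussian splatting rasterization, as in the published technique:
% sigma_i is the splat's opacity, mu_i its projected 2D center,
% Sigma'_i its projected 2D covariance; splats are sorted and blended front to back.
\alpha_i(\mathbf{x}) \;=\; \sigma_i \,
  \exp\!\Bigl(-\tfrac{1}{2}\,(\mathbf{x}-\boldsymbol{\mu}_i)^{\mathsf{T}}
  \,{\Sigma'_i}^{-1}\,(\mathbf{x}-\boldsymbol{\mu}_i)\Bigr),
\qquad
C(\mathbf{x}) \;=\; \sum_i c_i\,\alpha_i(\mathbf{x}) \prod_{j<i}\bigl(1-\alpha_j(\mathbf{x})\bigr)
```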
56:34: Next question, TroyBorg. Is that something where you need special software to create them,
56:38: or is it just something in Resonite so it knows how to render that new format for a 3D scan?
56:43: We essentially need code to render them. The gist of it, the way you approach it,
56:51: is you get your data set, your Gaussian splat, it's essentially points with lots of extra data.
56:59: You have the size of it, and then the color is encoded using something called spherical harmonics.
57:06: That's essentially a mathematical way to efficiently encode information on the surface of a sphere,
57:13: which means you can sample it based on the direction.
57:17: If you consider a sphere... I should've grabbed my brush.
57:24: I'm gonna grab a new one, because I'm too lazy to go over there.
57:33: Brushes... let's see... I'm not gonna go over there, because it's just a simple doodle.
57:38: Say you have this sphere in 2D, it's a circle.
57:45: And if it's a unit sphere, then each point on the sphere is essentially a direction.
57:50: So if you have information encoded on the surface of your sphere, then each point...
57:56: If I'm the observer here, so this is my eye, and I'm looking at this,
58:02: then the direction is essentially the point from the center of the sphere towards the observer.
58:08: I use this direction to sample the function, and I get a unique color for this particle direction.
58:13: If I look at it from this direction, then I get this direction, and I sample the color here,
58:22: and this color can be different than this color.
58:24: And this is why the Gaussian splats are really good at encoding stuff like reflections, for example,
58:31: because with reflections, the color at a point literally changes based on the angle you look at it from.
58:43: And it's the spherical harmonics that actually take up the bulk of the data for a Gaussian splat,
58:49: because from what I've seen, they use third-order spherical harmonics,
58:55: which means for each point you actually have 16 colors, which is quite a lot.
59:00: And a lot of the work is how do you compress that in a way that the GPU can decode very fast on the fly,
59:09: without eating all your VRAM.
59:13: But essentially, you write your code, to answer the question more directly,
59:17: you write your code to encode it properly, and then render it as efficiently as you can.
59:24: And you can utilize some of the existing rasterization pipelines as well, to save you some time.
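In formula form, the view-dependent color described above is evaluated from the spherical harmonic coefficients stored per splat; with third-order harmonics that is 16 coefficients per color channel.

```latex
% View-dependent color from spherical harmonics: d is the unit direction from
% the splat toward the viewer, Y_{lm} are the SH basis functions, and k_{lm}
% are the coefficients stored per splat. Third order means l = 0..3,
% i.e. (3+1)^2 = 16 coefficients per color channel.
c(\mathbf{d}) \;=\; \sum_{l=0}^{3}\;\sum_{m=-l}^{l} k_{lm}\, Y_{lm}(\mathbf{d})
```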
59:31: zitjustzit is asking, I don't have a good understanding of splats, but aren't they essentially particles?
59:36: So I kind of went over this a few times, they're not particles in the sense of a particle system,
59:41: there's some overlap, because each splat is a point, but it has a lot of additional data to it,
59:47: and it's also not a tiny small particle, but it can be variously sized color blob.
59:56: Next question: so does this work with any 3D scans? I don't really get them, but they're neat.
01:00:02: So the dataset I use for mine is essentially just photos; it's the same approach I use for traditional scans.
01:00:21: There's different ways to make them, but the most common one that I've seen is you usually just take lots of photos,
01:00:29: you use traditional approach, the photos get aligned in a space, and then you sort of estimate the depth.
01:00:39: Like with traditional 3D reconstruction, except for the splats, it doesn't really estimate the depth.
01:00:44: The way I've seen it done in the software I use is it starts with a sparse point cloud that's made from the tie points from the photos,
01:00:52: it's essentially points in space that are shared between the photos, and it generates splats from those.
01:01:00: And the way it does it is, I believe it uses gradient descent, which is a form of machine learning,
01:01:07: where each of the splats is actually taught how it should look so it matches your input images.
01:01:15: So that's usually the longest part of your reconstruction process, because it has to go through a lot of training,
01:01:23: like, with Postshot, it usually runs several dozen thousand training steps,
01:01:38: and usually in the beginning you can see the splats are very fuzzy and they're just moving around,
01:01:43: and they're settling into space and getting more detail, and it also adds more splats in between them where it needs to add more detail.
01:01:51: So there's like a whole kind of training process to it.
01:01:56: I actually have a video I can show you, because there's also a relevant question I can see.
01:02:04: So I'm gonna...
01:02:08: Because ShadowX is asking, does all common splatting software encode spherical harmonics?
01:02:14: I never noticed color changes in my scans in Scaniverse and Postshot.
01:02:17: So I know for sure Postshot does it, I don't know about Scaniverse because I don't use that one.
01:02:22: It's possible they simplify it, because I've seen some implementations of Gaussian splats,
01:02:27: they just throw away the spherical harmonics and encode a single color, which saves tons of space,
01:02:31: but you also lose one of the big benefits of them.
01:02:37: But I can tell you, Postshot definitely does it, and I have a video that showcases that pretty well.
01:02:47: So this is a figurine that I scanned before, and I reprocessed it with Gaussian splatting,
01:02:55: and watch the reflections on the surface of the statue.
01:03:00: You can see how they change based on where I'm looking from, it's kind of subtle.
01:03:05: If I look at the top, there's actually another interesting thing, and I have another video for this,
01:03:10: but Gaussian splatting is very useful if you have good coverage from all possible angles,
01:03:18: because the way the scanning process works, like I mentioned earlier,
01:03:22: they are trained to reproduce your input images as close as possible,
01:03:27: which means for all the areas where you have photo coverage from, they look generally great.
01:03:34: But if you move too far away from them, like in this case for example from the top,
01:03:38: I was not able to take any pictures from the top of it, it actually kind of like,
01:03:45: they start going a bit funky. Do you see how it's all kind of fuzzy and the color is blotchy?
01:03:56: And that's kind of like, for one it shows, it does encode the color based on the direction,
01:04:05: but it also shows one of the downsides of it, because I have another video here.
01:04:12: So this is a scan of, this is like, I don't even know how long ago, this was like over six years ago,
01:04:24: when my family from Ukraine, they were visiting over because my grandma, she was Ukrainian,
01:04:31: and they made borscht, which is another kind of traditional food,
01:04:36: and I wanted to scan it, but I literally didn't have time because they put it on the dish,
01:04:40: I was like, I'm gonna scan it, and I was only able to take three photos before they started moving things around.
01:04:46: But it actually made for an interesting scan, because I was like, how much can I get out of three photos?
01:04:52: And in the first part, this is traditional scan with a mesh surface that's done with, you know,
01:04:58: with I guess a metal shape. Oh, I already switched to the other one.
01:05:02: So you see, all the reflections, they're kind of like, you know, baked in.
01:05:05: It doesn't actually look like metal anymore.
01:05:09: There's, you know, lots of parts missing because literally they were not scanned,
01:05:13: but the software was able to estimate the surface.
01:05:18: It knows this is a straight surface. If I look at it from an angle,
01:05:21: apart from the missing parts, it's still coherent.
01:05:24: It still holds shape.
01:05:27: With Gaussian splatting, it doesn't necessarily reconstruct the actual shape.
01:05:32: It's just trying to look correct from angles, and you'll be able to see that in a moment.
01:05:37: So this is the Gaussian Splat, and you see it looks correct,
01:05:40: and the moment I move it, it just disintegrates.
01:05:43: Like, you see, it's just a jumble of, you know, colorful points,
01:05:46: and it's because all the views that I had, they're like relatively close to each other,
01:05:51: and for those views, it looks correct because it was trying to look correct,
01:05:56: and because there's no cameras, you know, from the other angles,
01:05:59: the Gaussians are free to do, you know, just whatever.
01:06:02: Like, they don't have anything to constrain them, you know, to look a particular way,
01:06:06: so it just ends up a jumble.
01:06:08: And that's a very kind of, to me, that's a very kind of interesting way
01:06:13: to visualize the differences between the scanning techniques.
01:06:18: But yeah, that's kind of what we did, like answer to, yes, they do encode the spherical harmonics,
01:06:23: and you can make it like, you know, pretty much with any scans,
01:06:25: but the quality of the scan is going to depend, you know, on your data set.
01:06:31: And I'll be kind of throwing, because I have like, terabytes of like, you know, 3D scans,
01:06:35: I'll be just throwing everything, you know, the software and seeing what it like ends up producing.
01:06:41: I know there's also other ways, there's like, you know, some software,
01:06:44: let's just double check in time, there's also, you know, some software that just generates it,
01:06:52: like, you know, with AI and stuff, but like, I don't know super much about that.
01:06:56: So there's like other ways to do them, but I'm mostly familiar with the one, you know, with,
01:07:02: I'm mostly familiar with, you know, like using photos as a dataset.
01:07:09: Next question, from jviden4: ProtoFlux is a VM? It compiles things?
01:07:15: So, technically, VM and compiling, those are like two separate things.
01:07:19: Also, Epic is asking what is a ProtoFlux VM, so I'm going to just combine those two questions into one.
01:07:27: So yes, it is a VM, which essentially means it's sort of like, you know, it has a defined sort of like runtime.
01:07:33: Like, it's technically a stack-based VM.
01:07:38: It's a, how do I put it, it's essentially sort of like an environment where the code of the nodes, you know,
01:07:45: it knows how to work with a particular environment that sort of isolates it, you know, from everything else.
01:07:55: It sort of compiles things, it's sort of like a halfway step.
01:07:59: It doesn't directly produce machine code from the actual code.
01:08:06: The actual code, you know, of the individual nodes, that ends up being machine code for the node itself,
01:08:11: but the way it operates is within the VM.
01:08:15: What ProtoFlux does, it builds something called execution lists and evaluation lists.
01:08:20: It's kind of like, you know, a sequence of nodes or sequence of impulses.
01:08:23: It's going to look at it and be like, okay, this executes, then this executes, then this executes,
01:08:27: and builds a list, and then during execution it already has the pre-built list,
01:08:32: as well as, like, it resolves things like, you know, stack allocation.
01:08:36: It's like, okay, this node needs to use this variable and this node needs to use this variable.
01:08:39: I'm going to allocate this space on the stack and, you know,
01:08:42: and I'm going to give these nodes the corresponding offsets so they can, you know,
01:08:48: get to and from the stack.
01:08:53: So it's kind of like, you know, it does a lot of kind of like the building process.
01:08:55: It doesn't end up as full machine code.
01:08:57: So like, I would say it's sort of like a halfway step towards compilation.
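To make the execution-list idea a little more concrete, here is a small hypothetical sketch; none of these types are FrooxEngine's actual ones, it just shows the shape of building the list and the stack offsets once and then running the flat list.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of "build once, execute many times": the node graph is
// flattened into an ordered list and every node gets a fixed offset into a
// shared stack buffer, so execution does no graph traversal or allocation.
interface INode
{
    int StackSlots { get; }                      // how many values this node needs
    void Execute(Span<float> stack, int offset); // reads/writes only its own slots
}

sealed class ExecutionList
{
    readonly List<(INode node, int offset)> _steps = new();
    int _stackSize;

    // The "compile" phase: resolve execution order and stack offsets ahead of time.
    public void Add(INode node)
    {
        _steps.Add((node, _stackSize));
        _stackSize += node.StackSlots;
    }

    // The "run" phase: just walk the pre-built list.
    public void Execute()
    {
        // A real implementation would bound or pool this buffer.
        Span<float> stack = stackalloc float[_stackSize];
        foreach (var (node, offset) in _steps)
            node.Execute(stack, offset);
    }
}
```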
01:09:02: Eventually we might consider like, you know, doing sort of a JIT compilation where it actually
01:09:06: makes, you know, full machine code for the whole thing,
01:09:10: which could help like improve the performance of it as well.
01:09:13: But right now it is a VM
01:09:18: with a sort of halfway compilation step to kind of speed things up.
01:09:22: It also, you know, validates certain things.
01:09:25: Like, for example, if you have, you know, infinite kind of continuation loops,
01:09:29: certain things like that are essentially illegal.
01:09:33: Like, you cannot have those be a valid program,
01:09:37: which kind of helps avoid, you know,
01:09:40: having some kind of issues where we'd have to figure out certain problems at runtime.
01:09:47: But in short, like the ProtoFlux VM, it's like a way for,
01:09:50: you know, the ProtoFlux to essentially do its job. It's like an environment, execution environment,
01:09:57: you know, that defines how it kind of works
01:10:00: and then all the nodes can operate within that environment.
01:10:05: Next question, Nitra is asking, is the current plan to move the graphical client to .NET 9
01:10:10: architecture before moving to Sauce? Yes, so we are currently,
01:10:16: I might actually just do, I've done it on the first one, but since I have a little bit better setup,
01:10:21: I might do it again just to get like, you know, a better view. Let me actually get up for this one.
01:10:26: I'm going to move over here. There we go.
01:10:31: So I'm going to move over here. I already have my brush that I forgot earlier.
01:10:34: I'm going to clean up all this stuff.
01:10:41: Let's move this to give you like a gist
01:10:44: of like the performance update. There we go. Clean all this up.
01:10:50: I'm not going to hit all. Grab my brush.
01:10:55: There we go. So right now, a little bit more.
01:10:58: Right now.
01:11:03: So the way like the situation is right now.
01:11:12: So imagine
01:11:13: this is Unity. I'm actually going to write it here.
01:11:20: And this is within the Unity,
01:11:23: we have FrooxEngine.
01:11:28: So this is FrooxEngine.
01:11:34: So Unity, you know,
01:11:39: it has its own stuff.
01:11:43: And with FrooxEngine, most things in FrooxEngine,
01:11:47: they're actually fully contained within FrooxEngine. So there's like lots of systems,
01:11:52: I'm just going to draw them as little boxes. And they're all kind of like fully contained. Unity has no idea
01:11:58: that they even exist. But then there's,
01:12:02: right now there's two systems which are sort of shared between the two.
01:12:09: There's a particle system.
01:12:15: And then there's the audio system.
01:12:23: So those two systems, they're essentially a hybrid.
01:12:28: Where FrooxEngine does some of the work and Unity does some of the work. And they're very kind of
01:12:32: intertwined. There's another part when FrooxEngine communicates with
01:12:37: Unity, there's other bits. There's also lots of
01:12:42: little connections between things
01:12:46: that tie the two together.
01:12:51: And the problem is, Unity uses something called Mono, which is a runtime,
01:12:57: it's also actually like a VM, like the ProtoFlux VM but different. But essentially it's responsible
01:13:02: for taking our code and running it. Translating it into instructions for
01:13:07: CPU, providing all the base library
01:13:13: implementations and so on. And the problem is,
01:13:17: the version that Unity uses, it's very old and it's very slow.
01:13:22: And because all of the FrooxEngine is running inside of it,
01:13:27: that makes all of this slow as well.
01:13:34: So what the plan is, in order to get a big performance update,
01:13:39: is first we need to simplify, we need to
01:13:43: disentangle the few bits of Froox Engine from Unity as much as possible.
01:13:49: The part I've been working on is the particle system.
01:13:56: I'll probably start testing this next week. It's called PhotonDust,
01:14:01: it's our new in-house particle system, and the reason we're doing it
01:14:06: is so we can actually take this whole bit...
01:14:11: I might just redraw it. I wanted to make a nice visual part
01:14:16: but it's not cooperating.
01:14:20: I'm just going to do this,
01:14:23: and then I'll do this, and just particle system, audio system.
01:14:27: So what we do, we essentially replace this with this. We make it fully contained
01:14:33: inside of Froox Engine. Once that is done, we're going to do the same for the audio engine,
01:14:38: so it's going to be also fully contained here, which means we don't have ties here.
01:14:42: And then this part, instead of lots of little wires,
01:14:48: we're going to rework this. So all the communication
01:14:52: with Unity happens via a very nicely defined
01:14:57: package, where it sends the data, and then the system
01:15:02: will do whatever here. But the tie to Unity is now just the data to be
01:15:12: rendered, and some stuff that needs to come back is sent over a very well-defined
01:15:17: interface that can be communicated over some kind of
01:15:22: inter-process communication mechanism. Probably a combination of
01:15:27: a shared memory and some pipe mechanism.
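Purely as a sketch of what a well-defined package over a pipe could look like (this is not the actual interface being built, and the message types are made up), each message carries a type and a payload length so both processes always know where one command ends and the next begins.

```csharp
using System.IO;
using System.IO.Pipes;

// Illustrative message envelope for an engine <-> renderer pipe. Bulk data
// (textures, meshes) would live in shared memory; only small commands like
// these cross the pipe itself.
enum MessageType : byte
{
    DrawFrame   = 1, // references previously uploaded resource handles
    AudioOutput = 2, // example of data flowing back the other way
}

static class PipeProtocol
{
    public static void Send(PipeStream pipe, MessageType type, byte[] payload)
    {
        var writer = new BinaryWriter(pipe);
        writer.Write((byte)type);
        writer.Write(payload.Length);
        writer.Write(payload);
        writer.Flush(); // keep the pipe open; the writer is deliberately not disposed
    }

    public static (MessageType Type, byte[] Payload) Receive(PipeStream pipe)
    {
        var reader = new BinaryReader(pipe);
        var type = (MessageType)reader.ReadByte();
        int length = reader.ReadInt32();
        return (type, reader.ReadBytes(length));
    }
}
```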
01:15:33: Once this is done, what we'll be able to do, we could actually take Froox Engine
01:15:37: and take this whole thing out, if I can even grab the whole thing, it's being unwieldy.
01:15:44: Just pretend this is smoother than it is.
01:15:48: They'll take it out into its own process.
01:15:52: And because we now control that process, instead of it being bound to Unity,
01:15:57: we can use .NET 9.
01:16:03: And this part, this is majority
01:16:06: of where time is spent running, except when it comes to rendering,
01:16:11: which is the Unity part. Which means, because we'll be able to run with .NET 9,
01:16:17: we'll get a huge performance boost.
01:16:21: And the way we know we're going to get a significant performance boost is because we've already done this
01:16:25: for our headless client. That was the first part of this performance work,
01:16:31: is move the headless client to use .NET 8, which now is .NET 9 because they released a new version.
01:16:38: The reason we wanted to do headless first
01:16:40: is because headless already exists outside of Unity, it's not tied to it.
01:16:45: So it was much easier to do this for headless than doing this for a graphical client.
01:16:50: And headless, it pretty much shares most of this.
01:16:54: Most of the code that's doing the heavy processing is in the headless, same as on the graphical client.
01:17:01: When we made the switch and we had the community start hosting events
01:17:05: with the .NET 8 headless, we noticed a huge performance boost.
01:17:09: There's been sessions, like for example the Grand Oasis karaoke.
01:17:15: I remember their headless used to struggle.
01:17:19: When it was getting around 25 people, the FPS of the headless would be dropping down
01:17:25: and the session would be degrading.
01:17:27: With the .NET 8, they've been able to host sessions which had, I think at the peak, 44 users.
01:17:34: And all the users, all of their IK, all the dynamic bones, all the ProtoFlux,
01:17:40: everything that's been computed on graphical client, it was being computed on headless,
01:17:46: minus obviously rendering stuff.
01:17:49: And the headless was able to maintain 60 frames per second with 44 users.
01:17:55: Which is like an order of magnitude improvement over running with mono.
01:18:06: So, doing it for headless first, that sort of let us gauge how much of a performance improvement will this switch make.
01:18:14: And whether it's worth it to do the separation as early as possible.
01:18:19: And based on the data, it's pretty much like, I feel like it's very, very worth it.
01:18:32: Where we can pull Froox Engine out of Unity, run it with .NET 9.
01:18:37: And then the communication will just do, instead of the communication happening within the process,
01:18:43: it's going to pretty much happen the same way except across process boundary.
01:18:52: The other benefit of this is, how do we align this?
01:18:56: Because even when we do this, once we reach this point, we'll still want to get rid of Unity for a number of reasons.
01:19:05: One of those is being like custom shaders. Those are really, really difficult to do with Unity,
01:19:10: at least making them real-time and making them support backwards compatibility,
01:19:15: making sure the content doesn't break, stuff like that.
01:19:18: Being able to use more efficient rendering methods.
01:19:21: Instead of having to rely on deferred rendering, we'll be able to use clustered forward,
01:19:29: which can handle lots of different shaders with lots of lights.
01:19:34: We'll want to get rid of Unity as well, and this whole thing,
01:19:38: where the communication between FrooxEngine, which does all the computations,
01:19:44: and then sends stuff like, please render this stuff for me.
01:19:48: Because this process makes this a little more defined,
01:19:51: we can essentially take the whole Unity, just gonna yeet it away,
01:20:00: and then we'll plug in Sauce instead.
01:20:08: So Sauce is gonna have its own things, and inside Sauce is actually gonna be,
01:20:12: right now it's being built on the Bevy rendering engine.
01:20:17: So I'm just gonna put it there, and the communication is gonna happen
01:20:20: pretty much the same way, and this is gonna do whatever.
01:20:25: So we can snip Unity out, and replace it with Sauce.
01:20:30: There's probably gonna be some minor modifications to this,
01:20:32: how it communicates, so we can build around the new features of Sauce and so on.
01:20:37: But the principle of it, by moving FrooxEngine out,
01:20:41: by making everything neatly contained, making a neat communication method,
01:20:49: is the next step.
01:20:52: There's actually the latest thing from the development of Sauce,
01:20:55: there was actually a decision made that Sauce is probably not gonna have
01:20:58: any C-Sharp parts at all, it's gonna be purely Rust-based,
01:21:02: which means it doesn't even need to worry about .NET 9 or C-Sharp Interop,
01:21:10: because its responsibility is gonna be rendering whatever FrooxEngine sends it,
01:21:15: and then maybe sending some messages back,
01:21:19: where it needs to, via the actual communication, to sync stuff up.
01:21:24: But all the actual world model, all the interaction that's gonna be fully contained
01:21:31: in FrooxEngine external to Sauce.
01:21:35: That on itself is gonna be a big upgrade,
01:21:38: because it's gonna be a much more modern engine,
01:21:41: we'll be able to do things like the custom shaders, like I was mentioning.
01:21:45: There's some potential benefits to this as well,
01:21:47: because the multi-process architecture is inspired by Chrome and Firefox,
01:21:53: which do the same thing, where your web browser is actually running multiple processes.
01:22:01: One of the benefits that adds is sandboxing,
01:22:04: because once this is kind of done, we'll probably do the big move like this,
01:22:09: and at some point later in the future, we'll split this even into more processes,
01:22:15: your host can be its own process, also .NET 9, or whatever the .NET version is.
01:22:22: So this can be one world, this can be another world,
01:22:26: and these will communicate with this, and this will communicate with this.
01:22:30: And the benefit is if a world crashes, it's not gonna bring the entire thing down.
01:22:36: It's the same thing in a web browser.
01:22:38: If you ever had your browser tab crash, this is a similar principle.
01:22:43: It crashes just the tab instead of crashing the whole thing.
01:22:47: Similar thing, we might be able to, I'm not promising this right now,
01:22:50: but we might be able to design this in a way where if the renderer crashes,
01:22:55: we'll just relaunch it. You'll still stay in the world that you're in,
01:22:58: your visuals are just gonna go away for a bit and then gonna come back.
01:23:01: So we can reboot this whole part without bringing this whole thing down.
01:23:05: And of course, if this part comes down, then it's over, and you have to restart.
01:23:11: But by splitting into more modules, you essentially minimize the possibility of the whole thing crashing,
01:23:19: because this part will eventually be doing relatively little.
01:23:22: It's just gonna be coordinating the different processes.
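Just to illustrate the relaunch idea (which, as said above, is not a promise), the coordinating process could supervise the renderer child roughly like this; the path and behavior here are entirely hypothetical.

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

// Hypothetical supervisor loop in the coordinating process: if the renderer
// child dies, relaunch it; the world/engine processes are never touched.
static class RendererSupervisor
{
    public static async Task RunAsync(string rendererExecutablePath)
    {
        while (true)
        {
            using var renderer = Process.Start(rendererExecutablePath)
                ?? throw new InvalidOperationException("Failed to start renderer");

            await renderer.WaitForExitAsync();

            if (renderer.ExitCode == 0)
                break; // clean shutdown was requested, stop supervising

            // Crash: visuals go away for a moment, but the worlds keep running
            // while the renderer comes back up and re-uploads its resources.
            Console.WriteLine("Renderer crashed, relaunching...");
        }
    }
}
```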
01:23:26: But for the first part, we're just gonna move Froox Engine into a separate process out of Unity.
01:23:33: That's gonna give us big benefit thanks to .NET 9.
01:23:37: There's other benefits because, for example, Unity's garbage collector is very slow
01:23:42: and very CPU heavy, but .NET 9 has way more performance as well.
01:23:48: We'll be able to utilize new performance benefits of .NET 9 in the code itself
01:23:52: because we'll be able to start using new functions within Froox Engine
01:23:56: because now we don't have to worry about what Unity supports.
01:24:01: Following that, the next big step is probably gonna be to switch to Sauce.
01:24:07: So we're gonna replace Unity with Sauce, and at some point in the future
01:24:10: we'll do more splitting for Froox Engine into more separate processes to improve stability.
01:24:17: And also add sandboxing, because once you do this, you can sandbox this whole process
01:24:23: using the operating system sandboxing primitives, which will improve security.
01:24:30: So that's the general overall plan, what we want to do with the architecture of the whole system.
01:24:38: I've been reading a lot about how Chrome and Firefox did it, and Firefox actually did a similar thing
01:24:43: where there used to be a monolithic process, and then they started doing work to break it down into separate processes
01:24:50: and eventually they did just two processes, and then they broke it down into even more.
01:24:55: And we're essentially gonna be doing a similar thing there.
01:24:58: So I hope this answers it, gives you a better idea of what we want to do with performance for Resonite
01:25:08: and what are the major steps that we need to take, and also explains why we are actually reworking the particle system and audio system.
01:25:18: Because on the surface, it might seem, you know, like, why are we reworking the particle and audio system
01:25:26: when we want, you know, more performance, and the reason is, you know, just so we can kind of move them
01:25:32: fully into Froox Engine, make them kind of mostly independent, like, you know, of Unity,
01:25:37: and then we can pull the Froox Engine out, and that's the major reason we're doing it.
01:25:41: The other part is, you know, so we have our own system that we kind of control, because once we also switch Unity for Sauce,
01:25:49: if the particle system was still in Unity, Sauce would have to reimplement it, and it would also complicate this whole part.
01:25:56: Because, like, now we have to, like, synchronize this particle system with all the details of the particle system on this end.
01:26:05: So, that's another benefit. But, there's also some actual performance benefit, even just from the new particle system.
01:26:15: Because the new particle system is designed to be asynchronous.
01:26:19: Which means if you do something really heavy, you're only going to see the particle system lag, and you will not lag as much.
01:26:25: Because the particle system, if it doesn't finish its computations within a specific time, it's just going to skip and render the previous state.
01:26:37: And the particle system itself will lag, but you won't lag as much. So that should help improve your overall framerate as well.
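A tiny sketch of that skip-and-reuse behavior, just to show the principle (this is not PhotonDust's actual code, and the type names are made up):

```csharp
using System.Threading.Tasks;

// If the particle update is not finished when the frame needs it, reuse the
// previous state instead of stalling the whole frame.
sealed class AsyncParticleSimulation
{
    ParticleState _latest = new();   // last fully computed state
    Task<ParticleState>? _pending;   // update currently in flight, if any

    public ParticleState GetStateForRendering(float deltaTime)
    {
        // Pick up a finished update, if there is one.
        if (_pending is { IsCompletedSuccessfully: true })
        {
            _latest = _pending.Result;
            _pending = null;
        }

        // Start the next update only when the previous one has finished, so a
        // heavy simulation just falls behind instead of stalling the frame.
        _pending ??= Task.Run(() => _latest.Step(deltaTime));

        // The frame always gets something to render, even if it is a step old.
        return _latest;
    }
}

sealed class ParticleState
{
    public ParticleState Step(float deltaTime)
    {
        // ...simulate particles here and return the new state...
        return new ParticleState();
    }
}
```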
01:26:44: So, that's pretty much the gist of it. The particle system is almost done. We'll probably start testing this upcoming week.
01:26:55: The audio system, that's going to be the next thing. After that, it's going to be the interface with Unity.
01:26:59: Once that is done, then the pull happens into the separate process, which is going to be a relatively simple process at that point.
01:27:06: Because everything will be in place for the pullout from Unity to happen.
01:27:13: So, hopefully this gives you a much better idea.
01:27:17: And if you have any questions about it, feel free to ask. We're always happy to clarify how this is going to work.
01:27:24: I'm going to go back. Boink. There we go.
01:27:31: We're going to sit down. There we go.
01:27:36: So that was another of those kind of rabbit hole questions. I kind of did this explanation on my first episode.
01:27:43: But I kind of wanted to do it because I have a little bit better setup with the right things.
01:27:48: So we can also make a clip out of it so we have something to refer people to.
01:27:54: But that's the answer to Nitra's question. I'm also checking the time. I have about 30 minutes left.
01:27:59: So let's see. There's a few questions but I think we can get through them.
01:28:06: It's actually kind of working out. I've been worried a little bit because I'm taking a while to answer some of the questions and going on tangents.
01:28:13: But it seems to kind of work out with the questions we have.
01:28:18: So next, ShadowX. Does all Gaussian splatting software encode... Oh, I already answered that one.
01:28:27: So the VM is kind of like an optimization layer rather than something akin to CLR or Chrome V8?
01:28:33: So it has the fundamentals of a VM, but the goal is just to resolve ahead of time what needs to be run so it plays quickly?
01:28:39: It's the same general principle as the CLR or Chrome's V8.
01:28:45: VM is essentially just an environment in which the code can exist and in which the code operates.
01:28:51: And the way the VM runs that code can differ. Some VMs can be purely interpreted.
01:28:59: You literally maybe just have a switch statement that is just switching based on the instruction and doing things.
01:29:05: Maybe it compiles into some sort of AST and then evaluates that.
01:29:11: Or maybe it takes it and actually emits machine code for whatever architecture you're running on.
01:29:16: There's lots of different ways for VM to execute your code.
01:29:20: So the way ProtoFlux executes code and the way CLR or V8 executes code is different.
01:29:28: I think actually V8 does a hybrid where it converts some parts into machine code and some it interprets.
01:29:37: But it doesn't interpret the original text of the code. It interprets some form of the abstract syntax.
01:29:43: I don't fully remember the details but I think V8 does a hybrid where it can actually have both.
01:29:51: CLR I think always translates it to machine code but one thing they did introduce with the latest versions is they have multi-tiered JIT compilation.
01:30:05: One of the things they do is when your code runs they will JIT compile it into machine code which is actually native code for a CPU.
01:30:16: And they JIT compile it fast because you don't want to be waiting for the application to actually run.
01:30:26: But that means they cannot do as many optimizations.
01:30:28: What they do though is, when the JIT compiler makes that code in the quick way, so it's not as optimal, it adds a counter that's incremented each time the method is called.
01:30:41: And if it crosses a certain threshold, say the method gets called more than 30 times, it's going to trigger the JIT compiler to compile a much more optimized version.
01:30:51: It goes really heavy on the optimizations to make much faster code, which is going to take it some time.
01:31:00: But also in the meanwhile, as long as it's doing it, it can still keep running the slow code that was already JIT compiled.
01:31:06: Once the JIT compiler is ready, it just swaps it out for the more optimized version and at that point your code actually speeds up.
01:31:17: So we have code that's being called very often, like the main game loop for example, it ends up compiling it in a very optimal way.
01:31:26: You have something that runs just once, like for example some initialization method.
01:31:30: Like when you start up the engine, there's some initialization that only runs once.
01:31:34: It doesn't need to do heavy optimizations on it because they're just a waste of time.
01:31:38: It speeds up the startup time and it kind of optimizes for both.
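As a toy illustration of that counter-and-promote behavior (the real runtime does this inside the JIT, per method; this only shows the principle, and the threshold value is made up):

```csharp
using System;

// Toy sketch of tiered compilation: keep calling the quickly produced version
// until the call count crosses a threshold, then swap in the heavily
// optimized version. In the real CLR the optimized version is also produced
// in the background while the quick one keeps running.
sealed class TieredMethod
{
    const int PromotionThreshold = 30;

    Func<int, int> _current;             // starts as the quick (tier 0) version
    readonly Func<int, int> _optimized;  // the heavily optimized (tier 1) version
    int _callCount;

    public TieredMethod(Func<int, int> quick, Func<int, int> optimized)
    {
        _current = quick;
        _optimized = optimized;
    }

    public int Invoke(int argument)
    {
        // Once the method proves to be "hot", promote it to the optimized tier.
        if (++_callCount == PromotionThreshold)
            _current = _optimized;

        return _current(argument);
    }
}
```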
01:31:41: And I think the latest version, it actually added, I forget the term for it they used,
01:31:50: but it's essentially like a multi-stage compilation where they look at what are the common arguments for a particular method
01:31:59: and then assume those arguments are constants and they will compile a special version of the method with those arguments as constants,
01:32:07: which lets you optimize even more, because now you don't have to worry about this argument being different values and having to do all the math for that.
01:32:14: It can pre-compute all that math that is dependent on that argument ahead of time so it actually runs much faster.
01:32:21: And if we have a method that is oftentimes called with very specific arguments, it now runs much faster.
01:32:28: And there's actually another VM that did this called the LuaJIT, which is like runtime for the Lua language.
01:32:39: And what was really cool about that one is like, even for Lua, it's just considered this kind of scripting language.
01:32:47: LuaJIT was able to outperform languages like C and C++ in some benchmarks.
01:32:53: Because with C and C++, all of the code is compiled ahead of time.
01:32:59: So you don't actually know what kind of arguments you're getting.
01:33:02: What LuaJIT was able to do is be like, okay, this value is always an integer.
01:33:08: Or maybe this value is always an integer that's number 42.
01:33:14: So I'm just going to compile a method that assumes this is 42.
01:33:19: And it makes a super optimal version of the method.
01:33:23: And that runs. It's even more optimized than the C and C++.
01:33:28: Because C and C++ cannot make those assumptions.
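The specialization trick can be sketched like this; a real JIT emits the guarded fast path as machine code and precomputes whatever depends on the assumed value, but the shape is the same (the value 42 and the operation here are just placeholders):

```csharp
using System;

// Toy sketch of value specialization: profiling says the divisor is almost
// always 42, so a specialized path assumes it, guarded by a cheap check that
// falls back to the general version when the assumption does not hold.
static class SpecializedDivide
{
    // General version: must handle any divisor.
    static int General(int value, int divisor) => value / divisor;

    // Version "compiled" under the assumption divisor == 42; a real JIT could
    // replace the division with cheaper precomputed operations here.
    static int AssumingDivisorIs42(int value) => value / 42;

    public static int Invoke(int value, int divisor)
    {
        if (divisor == 42)                     // guard: was the assumption right?
            return AssumingDivisorIs42(value); // yes: take the specialized path
        return General(value, divisor);        // no: fall back, still correct
    }
}
```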
01:33:32: There's some profiling compilers that actually run your code.
01:33:38: And they will try to also figure it out.
01:33:40: And then you compile your code with those profiling optimizations.
01:33:45: And they can do some of that too.
01:33:47: But it shows there's some benefits to the JIT compilers where they can be more adaptive.
01:33:57: And they can do it for free.
01:33:59: Because you don't have to do it as part of your development process.
01:34:05: Because once you upgrade it in your system, you get all these benefits.
01:34:11: And it's able to run your code that's the same exact code as you had before.
01:34:17: It's able to run it much faster because it's a lot smarter how it converts it into machine code.
01:34:24: So next question is, Nitra is asking, how about video players?
01:34:29: Are they already full inside Froox Engine as well?
01:34:31: No, for video players they actually exist more on the Unity side.
01:34:36: And they are pretty much going to remain majorly on the Unity side.
01:34:41: The reason for that is because the video player has a very tight integration with the Unity engine
01:34:46: because it needs to update the GPU textures with the decoded video data.
01:34:52: It adds a bit to the mechanism because we will need to send the audio data back for the Froox Engine to process and send back.
01:35:01: So that's going to be...
01:35:05: Video players are essentially going to be considered as an asset, because even stuff like textures.
01:35:10: When you load a texture, it needs to be uploaded to the GPU memory, it needs to be sent to the renderer.
01:35:15: And the way I plan to approach that one is through a mechanism called shared memory,
01:35:20: where the texture data itself, Froox Engine will allocate in a shared memory,
01:35:24: and then it will pass over the pipe, it will essentially tell the renderer,
01:35:30: here's shared memory, here's the texture information, the size, format, and so on,
01:35:36: please upload it to the memory under this handle, for example.
01:35:39: And it assigns it some kind of number to identify the texture.
01:35:45: And essentially sends that over to the Unity engine, the Unity engine is going to read the texture data,
01:35:50: run the upload to the GPU, and it's going to send a message back to Froox Engine,
01:35:54: be like, you know, texture number 420 has been uploaded to the GPU.
01:35:59: And Froox Engine knows, OK, this one's now loaded.
01:36:02: And then when it sends it, please render these things, it's going to be like,
01:36:05: OK, render this thing with texture number 422.
01:36:09: And it's going to send it as part of its package to Unity, and Unity will know,
01:36:13: OK, I have this texture and this number, and it's going to prepare things.
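A small hypothetical sketch of that handle bookkeeping on the FrooxEngine side; the message and type names are invented for illustration, and the pixel data itself would sit in shared memory rather than in these messages.

```csharp
using System.Collections.Generic;

// The engine asks the renderer to upload a texture under a numeric handle and
// only references that handle in draw commands once the renderer confirms it.
record UploadTextureRequest(int Handle, string SharedMemoryName, int Width, int Height, string Format);
record TextureUploaded(int Handle);

sealed class TextureTracker
{
    int _nextHandle = 1;
    readonly HashSet<int> _resident = new(); // handles the renderer confirmed

    public UploadTextureRequest RequestUpload(string sharedMemoryName, int width, int height)
        => new(_nextHandle++, sharedMemoryName, width, height, "RGBA8");

    // Called when the renderer replies "texture N has been uploaded".
    public void OnUploaded(TextureUploaded ack) => _resident.Add(ack.Handle);

    // Draw commands should only reference textures the renderer acknowledged.
    public bool CanRenderWith(int handle) => _resident.Contains(handle);
}
```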
01:36:20: It's going to be similar thing with video players, where the playback and decoding
01:36:23: happens on the Unity side, and Froox Engine just sends some basic information
01:36:28: to update it, be like, the position should be this and this,
01:36:34: you should do this and this with the playback, and it's going to be sending back some audio data.
01:36:39: But yeah, those parts are going to remain within Unity.
01:36:48: We have like 25 minutes, and there's only one question right now.
01:36:56: Chief item, I guess something I don't really understand about Resonite
01:36:59: is where simulation actually happens.
01:37:02: Does the server simulate everything and the clients just pull,
01:37:04: or do the clients do some work and send it to the server? It's a mix.
01:37:07: For example, player IK, local ProtoFlux.
01:37:10: Or do all servers and clients simulate everything?
01:37:12: Local, as defined in ProtoFlux, can get pretty confusing pretty fast.
01:37:17: So usually it's a mix.
01:37:19: But the way Resonite works, or FrooxEngine works,
01:37:25: is by default, everything is built around your data model.
01:37:29: And by default, the data model is implicitly synchronized.
01:37:33: Which means if you change something in the data model,
01:37:35: FrooxEngine will replicate it to everyone else.
01:37:41: And the way most things, like components and stuff works,
01:37:44: is the data model itself, it's sort of like an author on things.
01:37:51: It's like the data model says, this is how things should be.
01:37:55: And any state that ends up representing something,
01:38:01: like it represents something that is visual, some state of some system,
01:38:05: that's fully contained within the data model.
01:38:08: And that's the really important part.
01:38:09: The only thing that can be local to the components by default
01:38:13: is any caching data, or any data that can be deterministically
01:38:20: computed from the data model.
01:38:23: So if the data model changes, it doesn't matter what internal data it has,
01:38:26: the data model says things should be this way.
01:38:32: And then the whole synchronization is built on top of that.
01:38:35: The whole idea is by default, you don't actually have to think about it.
01:38:39: The data model is going to handle the synchronization for you.
01:38:43: It's going to resolve conflicts.
01:38:44: It's going to resolve if multiple people change a thing,
01:38:48: or if people change a thing they're not allowed to,
01:38:50: it's going to resolve those data changes.
01:38:55: And you essentially build your behaviors to respond to the data.
01:38:58: So if your data is guaranteed to be synchronized and conflict resolved,
01:39:03: then the behaviors that depend on the data always lead to the same,
01:39:07: or at least convergent, results.
01:39:11: What this does, it kind of gives you the ability to write systems in all different ways.
01:39:18: But the main thing is, you don't have to worry about synchronization.
01:39:25: It just kind of happens automatically.
01:39:29: And it kind of changes the problem where instead of...
01:39:34: Instead of like...
01:39:42: Instead of things being synced versus non-synced being a problem,
01:39:49: it turns it into an optimization problem.
01:39:52: Because you could have people computing multiple kinds of things,
01:39:56: or computing it on the wrong end, things that can be computed from other stuff.
01:40:00: And you end up wasting network traffic as a result.
01:40:04: But for me, that's a much better problem to have than things getting out of sync.
01:40:10: What we do have is the data model has mechanisms to optimize that.
01:40:14: One of those mechanisms being drives.
01:40:18: Drives essentially tell...
01:40:20: The drive is a way of telling the data model,
01:40:22: I'm taking control of this part of the data model.
01:40:26: Don't synchronize it.
01:40:27: I am responsible for making sure this stays consistent when it needs to.
01:40:37: And the way to think about drives is you can have something like a SmoothLerp node.
01:40:41: Which is one of the community favorites.
01:40:44: And the way that works, it actually has its own internal computation that's not synchronized.
01:40:48: Whatever the input to the SmoothLerp is, it's generally synchronized.
01:40:53: Because it comes from a data model, but the output doesn't need to be synchronized
01:40:57: because it's convergent.
01:40:59: So you're guaranteed to have the same value on the input for all users,
01:41:04: you can fully compute the output value on each user locally.
01:41:08: Because it's all converging towards the same value.
01:41:12: And as a result, everybody ends up with, if not the same,
01:41:18: at least very similar result on their end.
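To show why that is safe, here is a minimal sketch of a convergent smooth-follow (not the actual node's implementation): the target comes from the synchronized data model, the smoothing runs locally, and any small difference between clients shrinks every frame instead of growing.

```csharp
using System;

// Locally computed smooth-follow. The target is synchronized, so every client
// converges to (nearly) the same output without the output being networked.
sealed class SmoothFollow
{
    float _current;

    public float Update(float syncedTarget, float speed, float deltaTime)
    {
        // Frame-rate independent exponential smoothing toward the synced target.
        float t = 1f - MathF.Exp(-speed * deltaTime);
        _current += (syncedTarget - _current) * t;
        return _current; // convergent, so it is safe to drive a field with this
    }
}
```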
01:41:22: It is also possible, you can, if you want to, diverge this data model.
01:41:27: For example, ValueUserOverride, it does this.
01:41:32: But it does this in an interesting way, because it makes the divergence of values,
01:41:36: it actually makes it part of the data model, so the values that each user is supposed to get
01:41:41: is still synchronized, and everybody knows this user should be getting this value.
01:41:47: But the actual value that you drive with it, that's diverged for each user.
01:41:51: And it's like a mechanism built on this principle to handle this kind of scenario.
01:41:57: You can also sometimes derive things from things that are going to be different on each user,
01:42:02: and have each user see a different thing. You can diverge it.
01:42:06: The main point is, it's a deliberate choice.
01:42:10: At least it should be in most cases, unless you do it by accident.
01:42:13: But we'll try to make it harder to do it by accident.
01:42:21: If you don't specifically make this thing desync, it's much less likely to happen by accident.
01:42:30: Either by bug or misbehaviour.
01:42:32: The system is designed in a way to make sure that everybody shares the same general experience.
01:42:41: Generally, if you for example consider IK, like you mentioned IK.
01:42:48: IK, the actual computations of the bones, that's computed locally.
01:42:54: And the reason for that is because the inputs to the IK is the data model itself which is synchronised.
01:43:01: And the real-time values are hand positions, if you have tracking, it's feet position, head position.
01:43:07: Those are synchronised.
01:43:09: And because each user gets the same inputs to the IK, the IK on everybody's end ends up computing the same or very similar result.
01:43:19: Therefore the actual IK itself doesn't need to be synchronised.
01:43:23: Because the final positions are driven.
01:43:27: And essentially that is a way of the IK saying, if you give me the same inputs, I'm going to drive these bone positions to match those inputs in a somewhat mostly deterministic way.
01:43:42: So that doesn't need to be.
01:43:45: You also mentioned local protoflux.
01:43:47: With local protoflux, there is actually a way for you to hold some data that's outside of the data model.
01:43:54: So locals and stores, they are not synchronised.
01:43:58: If you drive something from those, it's going to diverge.
01:44:03: Unless you take the responsibility of computing that local in a way that's either convergent or deterministic.
01:44:11: So locals and stores, they're not going to give you a synchronization mechanism.
01:44:15: One thing that's missing right now, what I want to do to prevent divergence by accident, is have a localness analysis.
01:44:23: So if you have a bunch of protoflux and you try to drive something, it's going to check.
01:44:31: What is the source of this value? Is there anything that is local?
01:44:38: And if it finds it, it's going to give you a warning.
01:44:40: You're trying to drive something from a local value.
01:44:42: Unless you make sure that the results of this computation stay synchronized, even if this value differs.
01:44:54: Or unless you make sure that the local value is computed the same for every user or is very similar, this is going to diverge.
01:45:02: And that will make it a much more deliberate choice.
01:45:08: Where you're like, OK, I'm not doing this by accident, I really want to drive something from a local value.
01:45:13: I'm taking the responsibility of making sure this will match for users.
01:45:18: Or if you have a reason to diverge things for each user, it's a deliberate choice.
01:45:24: You're saying, I want this part of the data model to be diverged.
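A hypothetical sketch of what such a localness analysis could look like (it does not exist yet, and these interfaces are invented): walk backwards from the value being driven and flag anything that ultimately comes from a local or a store.

```csharp
using System.Collections.Generic;

// Before binding a drive, check whether the source value depends on anything
// that lives outside the synchronized data model.
interface IValueNode
{
    bool IsLocalSource { get; }             // locals, stores, per-user data...
    IEnumerable<IValueNode> Inputs { get; } // values this one is computed from
}

static class LocalnessAnalysis
{
    public static bool DependsOnLocalValue(IValueNode node, HashSet<IValueNode>? visited = null)
    {
        visited ??= new HashSet<IValueNode>();
        if (!visited.Add(node))
            return false; // already checked; also guards against cycles

        if (node.IsLocalSource)
            return true;  // driving from this may diverge between users

        foreach (var input in node.Inputs)
            if (DependsOnLocalValue(input, visited))
                return true;

        return false;
    }
}
```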
01:45:30: So that kind of answers the question. There's a fair bit I can do to this.
01:45:35: But the gist of it, the local in ProtoFlux, it literally means this is not part of the data model.
01:45:43: This is not synchronized for you. This is a mechanism you use to store data outside of the data model.
01:45:49: And if you feed it back into the data model, you sort of need to kind of take responsibility for making sure it's either convergent.
01:45:59: Or intentionally divergent.
01:46:02: The other part is, this also applies if you drive something.
01:46:06: Because if you use the local and then you just write the value, and the value is not driven,
01:46:10: the final value written into the data model ends up implicitly synchronized and you don't have the problem.
01:46:17: So this only applies if you're driving something as well.
01:46:22: So I hope that kind of helps understand this a fair bit better.
01:46:26: Just about 14 minutes left. There's still like one question.
01:46:30: I'll see how long this kind of takes to answer, but at this point I might not be able to answer all the questions if more pop up.
01:46:39: But feel free to ask them, I'll try to answer as many as possible until I get it in the full two hours.
01:46:47: So let's see.
01:46:49: Ozzy is asking, you mentioned before wanting to do cascading asset dependencies after particles, is this something you want to do?
01:46:56: It is an optimization that's sort of independent from the whole move, and that I feel could be done fairly fast.
01:47:05: I still haven't decided, I'm kind of thinking about slotting in before the audio as well, as part of the performance optimizations,
01:47:12: because that one will help, particularly with memory usage, CPU usage, during loading things.
01:47:21: For example, when people load into the world, and you get the loading lag as the stuff loads in.
01:47:29: The cascading asset dependencies, they can particularly help with those cases when the users have a lot of stuff that's only visible to them,
01:47:39: but not to everyone else, because right now you will still load it.
01:47:46: If you have avatars that are culled, or maybe the users are culled, you'll load all of them at once, and it's just this kind of big chunk.
01:47:53: With this system, it will be kind of more spread out.
01:47:57: The part I'm not certain about is whether it's worth doing this now, or doing this after making the .NET 9 switch, because it is going to be beneficial on both.
01:48:06: So it's one of those optimizations that's smaller, and it's kind of independent of the big move to .NET 9.
01:48:14: And it could provide benefit even now, even before we move to .NET 9.
01:48:19: And it will still provide benefit afterwards.
01:48:22: I'm not 100% decided on this one, I'll have to evaluate it a little bit, and evaluate how other things are going.
01:48:30: It's something I want to do, but no hard decision yet.
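Since the design is not decided, this is only a loose sketch of the general idea: an asset starts loading when the first thing that actually needs it shows up, instead of everything loading up front. All names here are hypothetical.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Dependency-driven asset loading: the load is triggered by the first
// interested user of the asset and can be dropped when the last one goes away.
sealed class AssetSlot<T>
{
    readonly Func<Task<T>> _load;
    readonly HashSet<object> _users = new();
    Task<T>? _loading;

    public AssetSlot(Func<Task<T>> load) => _load = load;

    public Task<T> Acquire(object user)
    {
        _users.Add(user);
        return _loading ??= _load(); // the first interested user starts the load
    }

    public void Release(object user)
    {
        _users.Remove(user);
        if (_users.Count == 0)
            _loading = null; // nothing needs it; the asset can be unloaded later
    }
}
```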
01:48:38: Next, Climber is asking, could you use the red highlighting from broken ProtoFlux in a different color for local computation?
01:48:46: I don't really understand the question, I'm sorry.
01:48:50: If ProtoFlux is red and it's broken, you need to fix whatever is broken before you start using it again.
01:48:58: Usually if it's red, there's something wrong I cannot run at all.
01:49:06: I don't think you should be using it anyway. If it's red, you need to fix whatever the issue is.
01:49:16: Next, Nitra is asking, are there any plans to open-source certain parts of the code, for example the components and ProtoFlux nodes, so that the community can contribute to those?
01:49:27: There's some plans, nothing's fully formalized yet.
01:49:32: We've had some discussions about it.
01:49:35: I'm not going to go too much into details, but my general approach is we would essentially do gradual open-sourcing of certain parts, especially ones that could really benefit from community contributions.
01:49:51: One example I can give you is, for example, the Importer and Exporter system, and also the Device Driver system, where it's very ripe for open-sourcing
01:50:08: where we essentially say, this is the Model Importer, this is the Volume Importer, and so on, and this is support for this format and this format.
01:50:19: And we can now do this: we make the code open, we open it up for community contributions, so people could contribute stuff like fixes for formats or for things not importing correctly.
01:50:33: Or alternatively, say you want to add support for some obscure format that we wouldn't support ourselves, because you're modding some kind of game or something and you want to mess with things.
01:50:48: You now can use the implementation that we provided as a reference, add another Importer, Exporter.
01:50:58: Or, if you need very specific fixes that are relevant to the project you're working on, you just make a fork of one of ours, or even the community's, and modify it for your purposes.
01:51:10: And you make changes that wouldn't make sense to have in the default one, but that are useful to you.
01:51:16: So, that's probably another kind of model I would want to follow, that is initially, we open source things partially, where it makes sense, where it's also easy to do, because open sourcing can be a complicated process if you want it for everything, because there's pieces of code that have certain licensing, and we need to make sure it's all compatible with the licensing, we need to make sure everything's audited and cleaned up.
01:51:44: So doing it by chunks, doing just some systems, I feel it's a much more easier, approachable way to start, and we can build from there.
01:51:55: The other part of this is, when you do open source something, you generally need maintainers.
01:52:03: Right now we don't really have a super good process for handling community contributions for these things, so that's something I feel we also need to heavily improve.
01:52:16: And that means we need to have prepared some manpower to look at the community pull requests, make sure we have a good communication there, and make that whole process run smoothly.
01:52:26: And also there's been some PRs that piled up against some of our projects, some of our open source parts, that we haven't really had a chance to properly look at because everything has been kind of busy.
01:52:39: So I'm a little bit hesitant to do it now, at least until we clear up some more things and we have a better process.
01:52:47: So it is also part of the consideration there.
01:52:51: But overall, it is something I'd want to do. I feel like you as a community, people are doing lots of cool things and tools.
01:53:00: Like the modding community, they do lots of really neat things.
01:53:06: And doing the gradual open sourcing, I feel that's a good way to empower you more, to give you more control over these things, give you more control to fix some things.
01:53:16: These parts, we're a small team, so sometimes our time is very limited to fix certain niche issues.
01:53:24: And if you give people the power to help contribute those fixes, I feel like overall the platform and the community can benefit from those.
01:53:34: As well as giving you a way to...
01:53:38: A big part of Resonite's philosophy is giving people as much control as possible.
01:53:43: Making the experience what you want it to be.
01:53:46: And I feel by doing this, if you really don't like something, or how Resonite handles certain things, you can fork that part and make your own version of it.
01:54:00: Or fix up the issues. You're not as dependent on us as you otherwise would have been.
01:54:07: The flipside to that, and the part that I'm usually worried about, is we also want to do it in a way that doesn't result in the platform fragmenting.
01:54:19: Where everybody ends up on a different version of the build, and then you don't have this shared community anymore.
01:54:27: Because I feel, especially at this stage, that can end up hurting the platform.
01:54:33: If it happens, especially too early.
01:54:36: And it's also one of the reasons I was thinking of going with importers and exporters first.
01:54:42: Because they cannot cause fragmentation, because they do not fragment the data model.
01:54:49: This just changes how things are brought into, or out of, the data model.
01:54:57: It doesn't make you incompatible with other clients.
01:55:01: You can have importers for data formats that only you support, and still exist with the same users.
01:55:08: And be able to join the same sessions.
01:55:11: That's pretty much on the whole open-sourcing kind of thing.
01:55:16: And that's the reason I want to approach it this way.
01:55:19: Take baby steps there, see how it works, see how everybody responds.
01:55:24: See how we are able to handle this, and then be comfortable with this.
01:55:32: We can now take a step further, open-source more parts.
01:55:37: And just make it a gradual process, rather than just a big flip of a switch, if that makes sense.
01:55:48: We have about 4 minutes left, there's no more questions.
01:55:52: So I might... I don't know if there's enough time for a ramble, because right now...
01:56:01: I don't know what I would ramble about, because...
01:56:05: Do I want to do more Gaussian splats? I've already rambled about Gaussian splats a fair bit.
01:56:10: If there's only so many questions left, I'll try to answer them, but I might also just end up a few minutes early.
01:56:15: Or I end up rambling about rambling, which is some sort of meta rambling, which I'm kind of doing right now...
01:56:23: I don't know, I'm actually kind of curious, like, with the Gaussian splat thing, is that something you'd like to see? Like, you'd like to play with?
01:56:34: Especially if you can bring stuff like this.
01:56:36: I can show you... I don't have, like, too many videos of those. I don't have, like, one more... no, wait.
01:56:45: Oh! I do have actually one video I didn't want to show. I do need to fetch this one from YouTube, because I haven't, like, imported this one. Hang on.
01:57:00: Let's see... Oh, so, like... Actually, let me just do this first before I start doing too many things at once.
01:57:08: There we go.
01:57:11: So I'm going to bring this one in.
01:57:18: Once it loads... This one's from YouTube, so it's, like, actually a little bit worse quality.
01:57:24: But this is another Gaussian splat. Well, I have lots, but I need to make videos of more.
01:57:30: This one I found, like, super neat, because, like, this is a very complex scene. This is from GSR at BLFC.
01:57:37: You can see, like, it captures a lot of, like, kind of cool detail, but there's a particular part I want you to pay attention to.
01:57:44: I'm going to point this out in a sec, because one of the huge benefits of Gaussian splats is they're really good at not only, you know, soft and fuzzy details, but also, you know, semi-transparent stuff.
01:57:56: So, watch this thing. Do you see, like, these kind of plastic, well, not plastic, but, like, these transparent, you know, bits? Do you see these?
01:58:05: Look at that. It's actually able to, you know, represent that, like, really well, which is something you really don't get, you know, with a traditional mesh-based, you know, photogrammetry.
01:58:16: And that's actually one of the things, you know, like, if you wanted to represent it as a mesh, you'd kind of lose that.
01:58:22: And that's why, you know, why Gaussian splats are really good at, you know, representing scenes that traditional photogrammetry is not.
01:58:32: And there's also lots of, like, you know, splats I've done, like, this October, I was, like, visiting the US, and I went to the Yellowstone National Park again.
01:58:42: I have lots of scans from there.
01:58:45: And, like, you know, a lot of them kind of have, like, you know, some, because there's a lot of geysers and everything, and there's actually, you know, there's, like, steam coming out of the geysers, you know, and there's, like, you know, water, so, like, it's reflective in some places.
01:58:56: And I found, with Gaussian splats, it actually reconstructs pretty well, like, it even captures, you know, some of the steam and air, and gives it, you know, more of a volume.
01:59:06: So they're, like, a really cool way, you know, of, like, representing those scenes, and I just kind of want to be able to, you know, bring that in and be, like, you know, publish those on Resonite, and be, like, you know, you want to come see, like, you know, bits of, like, you know, Yellowstone, bits of this and that, you know, like, just go to this world and you can just view it and show it to people.
01:59:26: But yeah, that actually kind of filled the time, actually, because we have about 30 seconds left, so I'm pretty much going to end it there.
01:59:36: So thank you very much for everyone, you know, for your questions, thank you very much for, like, you know, watching and, you know, and listening, like, my ramblings and explanations and going off tangents.
01:59:47: I hope you, like, you enjoyed this episode and thank you also very much, you know, for just, like, supporting Resonite, whether it's, like, you know, like, on our Patreon, you know, whether it's just by making lots of cool content or, you know, sharing stuff on social media, like, whatever, whatever you do, you know, it, it helps the platform and, like, you know, we appreciate it a lot.
02:00:16: Warp.