The Resonance/2024-12-01/Transcript

From Resonite Wiki
{{OfficeHoursTranscriptHeader|The Resonance|2024-12-01|url=https://www.youtube.com/watch?v=rAUkMFkg1_o&list=PLQn4R3khhxITNPmhpSJx5q7-PgeRFGlyQ|autogen=YouTube using Whisper}}


00:12: Waaah...


00:19: Hello...


00:20: Well, wait...


00:24: I've actually clicked it on the menu.


00:28: Hello JViden4.


00:33: Hello everyone.


00:36: I'm just making sure everything is running, got all the stream stuff.


00:52: Make sure my audio is all good, can you hear me fine?


00:56: Well, I was loud by 1.3 seconds.


00:59: Also thank you for the cheer Nitra, that was quick.


01:05: Okay, so...


01:06: Let me make sure...


01:12: Channel, there we go.


01:14: Everything should be going.


01:15: So hello everyone.


01:18: Oh boy, I'm getting feedback, there we go.


01:21: Hello everyone, I'm Frooxius, and welcome to the third episode of The Resonance.


01:26: Oh my god, thank you so much for the cheers Emergence and Temporal Shift.


01:30: Hello everyone.


01:32: So, this is the third episode of The Resonance.


01:36: It's essentially like a combination of office hours and podcasts.


01:40: Where you can ask anything about...


01:45: Anything about Resonite, whether it's its development, philosophy, how things are going with development, its past, its future, where it's heading.


01:54: The goal is to have a combination of Q&A, so you can ask questions.


01:58: Whatever you'd like to know, I try to answer the best of my ability.


02:03: And we also have...


02:05: I'm hearing that the microphone sounds windy, let me double-check that OBS didn't switch microphone on me again.


02:13: Test 1-2. It's in front of my face.


02:18: Properties. Yeah, it's using the right one.


02:22: Is it understandable?


02:25: It's really strange.


02:28: Like, it shouldn't be compressing.


02:31: It's a wireless microphone, but it's an un-custom thing.


02:35: But, anyway, let me know if the voice is OK if it's understandable.


02:42: Anyway, you're free to ask any questions, and I'm also going to do some general talking about the high-level concepts of Resonite, what its past is, what its future is going to be, which direction we want to head it, and so on.


02:59: One thing, if you want to ask questions, make sure to put a question mark. Actually, let me double-check.


03:07: Oh, I didn't save the version with the auto-add.


03:11: Auto-pin. There we go. OK, now it should be working.


03:15: Make sure to end your question with a question mark, that way it kind of pops up on my thing.


03:21: And we already have some popping up.


03:23: So, with that, we should be good to get started.


03:26: I might switch this camera, just like this. There we go.


03:30: So, hello everyone. Hello, I'm bad at names. Hello TroyBorg. Hello Lexavo.


03:37: So, we actually got the first question from Lexavo.


03:40: Do you think you'll have any other guests on your show that might focus on specific topics?


03:47: It's possible. Like, the goal of this is kind of like, you know, sort of like my kind of office hours.


03:57: So, it's probably going to be like the main focus, but I'm also kind of playing with the format a bit.


04:03: Unfortunately, Cyro couldn't make it. Like, I usually have Cyro co-hosting because we can have a good back and forth.


04:07: That's on the technical things, and sort of like, you know, the philosophy of Resonite.


04:14: But I kind of like, I don't see where it kind of goes, because I had like some ideas,


04:19: and for the first two streams that I did, there was like so many questions that we didn't really get much to the chill parts of it.


04:26: So we're going to kind of like explore that.


04:28: We might have like some other people as well, kind of talk to them about like specific stuffs of Resonite,


04:33: but I don't have any super specific plans just yet.


04:40: So we'll kind of see, at least for starters I kind of want to keep things simple, you know, take it like essentially baby steps.


04:49: But I feel like probably at some point I'll start bringing in more people as well,


04:54: so we can kind of talk about like how some stuff has been going and so on, but I'm still kind of figuring things out.


05:02: So the next question is from Emergence at Temporal Shift, what is the funnest particle?


05:12: And it kind of depends, like I don't know if you can get like a, maybe, like the thing that comes to mind is like you know particles that would make sounds,


05:20: but that's one of the things you cannot do right now.


05:24: You make like a particle that does like some kind of fun plop sound, or goes like boink every time it, you know, bounces or something like that.


05:34: There's actually a kind of interesting thing, because it's a request we got in the past. I know I'm kind of going off on a tangent right now already,


05:42: where you can, where people want to like you know particles that make sound when they collide with something, or they generally want events so they can react.


05:51: The only thing is particles, the simulation runs locally on each person, so it's not like 100% in sync.


05:59: The people will be seeing similar things but not exactly the same.


06:03: So like for, you may have a clump of particles and one goes like, one might you know go like this way and for the other person it goes this way.


06:12: So if you have like one person handling the events, then something might happen you know, like say like for me the particle hits here and for the other person it hits here.


06:21: So if you do some event at this point, then it's going to be like a bit of a disconnect for the users, because for them the particle hit over here or maybe just missed.


06:32: And it's kind of an interesting problem, and one way to kind of approach that kind of thing is to make sure effects are only things that are local.


06:39: So like for example local sound, that way everybody you know, say if I do like bubbles, and if you like you know bop them and they pop, you will hear the pop sound and it's going to be kind of localized mostly to you.


06:51: And if it's like similar enough, it's just going to be close enough that people will not notice.


06:57: But it's a curious problem, and kind of a little bit of a tangent from the question.


07:06: So the next one we have, JViden4 is asking: I was confused by the stress test. The announcement said it was run on .NET 8 as opposed to .NET 9. Was it a typo or was it meant to establish a baseline?


07:18: So it was kind of neither. We have a .NET 9 headless ready. We kind of wanted to push it, like give people time to prepare.


07:29: But the team running the event, they decided to go with .NET 8 for the testing.


07:35: And the main test wasn't even the performance of the headless itself, it was to make sure that the hardware it's running on and the connections it's running on are able to handle the event.


07:47: The focus wasn't as much testing the performance of the headless itself, but more kind of a combination of the hardware and making sure we are ready for the event with the setup we have.


07:57: And it was also one of the reasons because we wanted the event to be as flawless as possible.


08:03: The team decided to go with .NET 8 because .NET 9 technically isn't released yet, so we kind of stuck with that.


08:14: Even though the probability of it breaking things is very low, even if it was a 1% chance that something might break, we wanted to eliminate that 1%.


08:29: GlovinVR is asking, is there any focus on actual tutorial on Resonite? Do new users suffer from not understanding the dashboard, how to find avatars, and how to find people?


08:37: This seems like an easy thing that can be delegated out and does not need to take up any of your bandwidth.


08:41: Yes, that's actually something that the content team is largely working on. They've been looking at ways to improve the tutorial experience and overhaul some parts of it.


08:50: Even the existing experience that we have has already been mostly delegated.


08:57: It's part of that, because getting new users to the platform, it crosses lots of different systems.


09:04: Because when you start up, we want to make sure that the user has an account, that their audio is working.


09:12: So we guide them through a few initial steps, and that's a part that's more on the coding side.


09:21: If there's problems with that part, or there's something we need to add to that part, then it needs to be handled by our engineering team.


09:27: Then the user gets brought into the in-world tutorial that explains things, and it's mostly handled by the content team.


09:34: We also had Ariel, who's our new person handling all the marketing stuff and development, communications, and so on.


09:44: She's been looking, because she's relatively new to Resonite as well, so she's been using that perspective to be like,


09:51: this is what we could do to smooth out the initial user experience, and she's been talking with the content team.


09:58: And we're looking at how do we improve that experience to reduce frustrations for new users.


10:07: And there's definitely a lot of things you could do.


10:09: The only part is it's always difficult. I won't say it's a simple thing to do,


10:17: because you always have to balance how much information do you give the user at once.


10:23: And in the past, we tried approaches where we told them about everything.


10:26: There's inventory, there's contacts, there's this thing, and this thing, and this thing.


10:31: And what we found ends up happening is a lot of users, they get overwhelmed, and they just shut down,


10:40: and they don't even understand the basic bits. They will not know how to grab things, how to switch locomotions.


10:46: So you kind of have to ease the users into it, build simple interactions, and building those kinds of tutorials takes a fair bit of time.


11:00: There's other aspects to this as well. For example, one of the things that people want to do when they come in here,


11:05: they want to set up their avatar. And the tricky thing with that is that it requires use of advanced tools,


11:13: like the developer tools and so on, it requires use of the avatar creator.


11:17: And the avatar creator is something we want to rewrite, but that's an engineering task.


11:22: That's not something that the content team can do right now.


11:27: So there's a lot of aspects to this. They can build a better tutorial for some things, but some things do require some engineering work.


11:36: And we kind of have to design things in a way that we also avoid wasting too much effort on certain things,


11:43: because we know we're going to rework stuff like the UI, the inventory UI is going to be reworked at some point.


11:52: So then it becomes a question of how much time do we invest into the tutorial for the current one when we're going to replace it.


11:57: So some of those parts, we might just do a simple tutorial and do a better one later on once the engineering task comes through.


12:07: So there's just a lot of complexities to these kinds of things and it takes time to improve them.


12:16: What helps us the most is getting information about what are the particular frustration points for new users.


12:23: If somebody new comes to the platform, what do they get stuck on? What do they want to do? What's their motivation?


12:30: Because if we even know the user wants to set up their avatar, we can be like, okay, we're going to put things that direct them in the right direction.


12:39: But also with the avatar setup, there's always a combination of how do we make the tooling simpler so we don't need as much tutorial for it.


12:49: Because one of the things we did a few months back is introduce the Resonite packages.


12:54: And if there exists a Resonite package for the avatar that the user wants to use, they just drag and drop, it makes the whole process much simpler.


13:01: We don't have to explain how to use the developer tool, how to use the material tool, you literally just kind of drag and drop and you have a simple interface.


13:10: But that doesn't work in 100% of the cases, so it's a particularly challenging problem.


13:19: It's something we do talk about on the team, it's something that's important to us.


13:24: We want to ease in the onboarding of new users, make them as comfortable as possible.


13:29: And we're kind of working at it from different fronts, both from engineering and from the content side, as well as marketing and communications.


13:41: Let's see...


13:41: Jack the Fox author is asking,


13:43: My question for today is about ProtoFlux. In what direction do you want the language to evolve going forward? What future language features are you looking forward to?


13:52: So there's like a bunch...


14:06: There's actually... how do I put it... The funny thing about it is, if I look at the actual future of scripting in Resonite, it's actually not just ProtoFlux.


14:08: One of the...


14:12: One of the things about...


14:16: The way I view visual scripting is that it has its drawbacks and has its benefits.


14:24: And one of the drawbacks is that when you write really complex behaviors, it gets a lot harder to manage.


14:32: Where typical text-based programming language might be simpler.


14:37: But one of its benefits is that you literally...


14:39: It's very hands-on.


14:41: You literally drag wires. If I want to control these lights, I just pull things from this and I drag wires.


14:47: And it has a very hands-on feeling. It's very spatial.


14:53: And the way I imagine the optimal way for this to work is to actually be combined with a more typical text-based programming language.


15:06: Where if you have a lot of heavy logic, a lot of complex behaviors...


15:16: It's much simpler to code things that way.


15:22: But then if you want to wire those complex behaviors into the world, that's where visual scripting can come in handy.


15:29: And I think we'll get the most strength by combining both.


15:34: And the way I wanted to approach the typical text-based programming is by integration of WebAssembly.


15:41: Which will essentially allow you to use lots of different languages, even languages like C and C++.


15:50: With those you can bring support for other languages like Lua, Python, lots of other languages.


15:57: Write a little bit of complex code, and then some of that code might be exposed as a node.


16:01: And that node you kind of wire into other things, you do maybe a little extra operations.


16:05: It's almost like, if you're familiar with electronics, it's almost like having an integrated circuit.


16:12: And the integrated circuit, it has a lot of the complex logic.


16:16: And it could be written in a typical language, compiled into a WebAssembly module.


16:23: And then the integrated circuit is going to have a bunch of extra things around it that are wired into inputs and outputs.


16:29: And make it easier to interface with things.


16:36: So to me that's the most optimal state, where we have both.


16:40: And we can combine them in a way where you get the strengths of each, and weaknesses of neither essentially.


16:49: That said, there are definitely things we can do to improve ProtoFlux.


16:53: The two big things I'm particularly looking forward to are nested nodes.


17:00: Those will let you create essentially package-like functions.


17:03: You'll be able to define...


17:06: If I... I kinda wanna draw this one in, so...


17:10: I should probably have done this at the start, but...


17:17: I kinda forgot...


17:22: Let's see...


17:23: If I move to the... If I end up moving... This is probably gonna be too noisy visually.


17:30: I gotta pick it up.


17:32: And let's try this. I'm gonna move this over here.


17:41: So...


17:43: Make sure I'm not colliding with anything.


17:46: So the idea is you essentially define a node with your set of inputs.


17:56: And this is my thinking for the interface.


18:00: So this would be your inputs.


18:03: So for example you can have value inputs, you can have some impulse inputs.


18:07: And you have some outputs. It can be values as well as impulses.


18:14: And then inside of the node you can do whatever you want.


18:20: Maybe this goes here, maybe this goes here, this goes here.


18:24: And this goes here, and this goes here, and here.


18:27: And then this goes here.


18:29: Or maybe this goes here, and this goes here.


18:33: And once you define this, you essentially, this becomes its own node that you can then reuse.


18:41: So you get like a node that has the same interface that you defined over there.


18:49: And this is sort of like the internals of that node.


18:52: And then you can have instances of that node that you can use in lots of different places.


18:58: With this kind of mechanism, you'll be able to package a lot of common functionality into your own custom nodes and just reuse them in a lot of places without having to copy all of this multiple times.


19:14: Which is going to help with performance for ProtoFlux, because the system will not need to compile essentially the same code multiple times.


19:22: But it'll also help with the community, because you'll be able to build libraries of ProtoFlux nodes and just kind of distribute those and let people use a lot of your custom nodes.


19:34: So I think that's going to be a particularly big feature for ProtoFlux once this kind of lands.


19:41: It's something that's already supported internally by the ProtoFlux VM, but it's not integrated with FrooxEngine yet.


19:51: There's another aspect to this as well, because once we have support for custom nodes, we can do lots of cool things where this essentially becomes like a function, like an interface.


20:04: So you can have systems like, for example, the particle system that I'm actually working on.


20:11: And say you want to write a module for the system, the particle system could have bindings that accept...


20:22: They essentially accept any node that, for example, has three inputs.


20:29: Say, for example, position, lifetime, that's how long the particle has existed, and say direction.


20:44: And then you have output, and the output is a new position.


20:50: And then inside you can essentially do whatever math you want.


20:56: And if your node, if your custom node follows this specific interface, like it has these specific inputs, this specific output, it becomes a thing.


21:05: You can just drop in as a module into the particle system to drive the particle's position, for example, or its color, or other properties.


21:15: And you'll be able to package behaviors and drop them into other ProtoFlux functions, and have essentially a way to visually, using the visual scripting, define completely new modules for the particle system.


21:32: But it expands beyond that. You'll be able to do procedural textures.


21:36: Like one node that you might be able to do is one with an interface where you literally have two inputs. Or maybe just one input even.


21:45: Say the UV, that's the UV coordinate in the texture, and then a color.


21:53: And then inside you do whatever, and on the output you have a color.


21:59: And if it follows this kind of interface, what it essentially does is you get a texture that's like a square.


22:09: For each pixel, your node gets the UV coordinate and it turns it into a color.


22:15: So if you want to make a procedural texture where each pixel can be computed completely independent of all others, all you need to do is define this.


22:25: Make sure you have UV input, you have color output, and this whole thing can become your own custom procedural texture.


22:33: Where you just decide, based on the coordinate you're in, you're going to do whatever you want to compute pixel color and it's just going to compute it for you.


22:42: And with this, it will also fit in a way that this can be done in a multi-threaded manner.


22:49: Because each pixel is independent, so the code that's generating the texture can call this node in parallel.


22:58: There's going to be more complicated ones. You'll be able to do your own custom procedural meshes, for example.


23:09: That one is probably going to be a little bit more complicated, because you'll have to build the geometry.


23:15: But essentially, the way that one might work is you get an impulse, and then you do whatever logic you want to build a mesh, and now you have your procedural mesh component.


23:26: And you can just use it like any other procedural component.


23:30: I think once this goes in, this is going to be a particularly powerful mechanism.


23:36: A lot of systems that don't have much to do with ProtoFlux right now, they will strongly benefit from it.


23:44: So this to me is going to be a really big feature of ProtoFlux.


23:51: The other one that I am particularly looking forward to, especially implementing it and playing with it, is the DSP mechanism.


23:59: And what that will let you do is make sort of workflows with the nodes to do stuff like processing audio, processing textures, and processing meshes.


24:12: With those, you'll be able to do stuff like build your own audio studio, or music studio.


24:20: Where you can do filters on audio, you can have signal generators, and you could pretty much use Resonite to produce music or produce sound effects.


24:31: Or you could use it to make interactive audio-visual experience.


24:37: Where there's a lot of real-time processing through audio, and you can feed it what's happening in the world, and change those effects.


24:45: And that in itself will open up a lot of new workflows and options that are not available right now.


24:55: They're a little bit there, but not enough for people to really even realize it.


25:02: So the DSP is a big one. Same with the texture one, you'll be able to do procedural textures, which on itself is also really fun to play with.


25:12: But also you can now, once we have those, you'll be able to use Resonite as a production tool.


25:18: Even if you're building a game in Unity or Unreal, you could use Resonite as part of your workflow to produce some of the materials for that game.


25:26: And it gets a lot of the benefits of having it be a social sandbox platform.


25:32: Because, say you're working on a sound effect, or you're working on music, or working on procedural texture, you can invite people in and you can collaborate in real-time.


25:43: That's given thanks to Resonite's architecture, it's just automatic.


25:48: If you have your favorite setup for studio, for working on something, you can just save it into your inventory, send it to somebody, or just load it, or you can publish it and let other people play with your studio setup.


26:02: The DSP part is also going to be a big doorway to lots of new workflows and lots of new ways to use Resonite.


26:15: I'm really excited for that part, and also part of it is I just love audio-visual stuff.


26:21: You wire a few nodes, and now you have some cool visuals coming out of it, or some cool audio, and you can mess with it.


26:28: There's another part for the mesh processing.


26:35: You could, for example, have a node where you input a mesh, and on the output you get a subdivided, smoothed out mesh.


26:48: Or maybe it voxelizes, maybe it triangulates, maybe it applies a boolean filter, or maybe there's some perturbation to the surface.


26:57: And that feature I think will combine with yet another feature that's on the roadmap, which is vertex-based mesh editing.


27:07: Because you'd essentially be able to do a thing where, say you have a simple mesh, and this is what you're editing.


27:20: And then this mesh, this live mesh, I'm actually going to delete this one in the background because they're a bit bad for the contrast.


27:34: So I'm taking this a little bit for this question, but this is one I'm particularly excited for, so I want to go a little bit in-depth on this.


27:45: Okay, that should be better.


27:47: So you're editing this mesh, and then you have your own node setup that's doing whatever processing, and it's making a more complex shape out of it because it's applying a bunch of stuff.


27:58: And you edit one of the vertices and it just runs through the pipeline, your mesh DSP processing pipeline, and computes a new output mesh based on this as an input.


28:09: So you move this vertex, and this one maybe does this kind of thing.


28:15: You do this kind of modeling, if you use Blender, this is what you do with the modifiers, where you can have simple base geometry and have a subdivision surface, and then you're moving vertices around, and it's updating the more complex mesh by processing with modifiers.


28:33: The mesh DSP combined with the vertex editing will allow for a very similar workflow, but one that I feel is even more powerful and flexible, and also will probably be a lot more performant because our processing pipeline is very asynchronous.


28:50: Because when I mess with Blender, one of the things that kind of bugs me is if you use modifiers, it takes a lot of processing, the whole interface essentially lags.


29:02: The way stuff works in Resonite is you will not lag as a whole, but only the thing that's updating will maybe take, say this takes a second to update and I move the vertex, I'll see the result in a second, but I will not lag entirely for a second.


29:17: So that itself I think will combine really well with lots of upcoming features and all sorts of existing features.


29:25: And for me that's just a big part, even just beyond ProtoFlux, it's how I like to design things.


29:36: This way where each system is very general, it does its own thing, but also it has lots of ways to interact with lots of other systems.


29:46: Because that way you get all these kind of emergent workflows that become very powerful, and you get lots of ways to combine those systems into a single pipeline.


29:58: So this should cover it. I'm going to hop back here.


30:05: I went a little deep on this particular question, but hopefully that kind of sheds some idea on some of the future things and things I want to do, not just with ProtoFlux, but with other things.


30:26: There we go. Sorry, I'm just settling back in.


30:33: I hope that answers the question in good detail.


30:41: So next question we have...


30:47: TroyBorg is asking, you said you had a side project you wanted to work on when you get done with the particle system before starting the audio system rework. Are you able to talk about it?


30:57: Yes, so the thing I was kind of thinking about is...


31:05: Essentially I've been playing a lot with Gaussian splatting recently, and I can actually show you some of the videos.


31:15: Let me bring some of my Splats.


31:19: The only way I can actually show you these is through a video.


31:23: This is one I did very recently, so this is probably one of the best scans.


31:32: You can see if I play this, I almost ask you if this is loaded for you, but it only needs to be loaded for me.


31:40: If you look at this, this is a scan of a fursuit head of a friend who is here in Czech Republic.


31:47: His name is Amju.


31:48: He let me scan his fursuit head.


31:51: I first reconstructed it with a traditional technique, but then I started playing with Gaussian splatting software.


31:56: I threw the same dataset at it, and the result is incredible.


32:00: If you look at the details of the fur, the technique is capable of capturing the softness of it.


32:09: It just looks surreal.


32:15: That's the easiest way to describe it.


32:19: It gives you an incredible amount of detail, while still being able to render this at interactive frame rates.


32:28: I've been 3D scanning for years.


32:31: I love 3D scanning stuff and making models of things.


32:35: And this technique offers a way to reconstruct things.


29:39: general it does its you know own thing but also it has like lots of ways to interact with lots of other systems
32:41: I can actually show you how the result of this looks with traditional photogrammetry.


29:46: because that way you get all these kind of emergent workflows that that become
32:54: So if I bring this, you see this is traditional mesh.


29:51: like you know very powerful and you get lots of ways to combine those systems you know into like a single pipeline so
32:58: And it's a perfectly good result. I was really happy with this.


29:59: this should kind of cover it um I'm going to hop back
33:03: But there's no softness in the hair. There's artifacts around the fur.


30:05: here I went I went a little deep like on this particular question um but
33:10: It gets kind of blob-ish. It loses its softness that the Gaussian Splats are able to preserve.


30:13: hopefully hopefully uh that kind know shed some idea on like some of the kind of future
33:18: This is another kind of example.


30:18: things and things I want to do you know with um of it just prox but other
33:20: I took these photos for this in 2016. That's like 8 years ago now.


30:23: things um there we go
33:29: And also, if you just look at the part, it just looks real.


30:29: sorry I'm just settling back in um so I hope that answers the
33:35: I'm really impressed with the technique. I've been having a lot of fun with it.


30:35: question like in a good detail
33:41: And on my off-time, I've been looking for ways...


30:40: um so next question we have
33:53: I'm kind of looking at how it works.


30:45: um uh troyborg is asking you said you had a side project you wanted to work on when
33:56: And the way the Gaussian Splats work, it's relatively simple in principle.


30:52: you get done with particle system before starting audio system reor are you able to talk about it yes so the thing I was
34:01: It's like an extension of point cloud, but instead of just tiny points,


30:59: kind of thinking about is um um um essentially I've been playing a
34:05: each of the points can be a colorful blob that has fuzzy edges to it,


31:06: lot with gassian splatting recently and I can actually I can show you you know some of the
34:11: and they can have different sizes.


31:11: videos um let me bring some of my
34:12: You can actually see some of the individual splats.


31:17: Splats so right now I only the only way I can actually show you these is you know through a video so this is um this
34:16: Some of them are long and stretched. Some of them can be round.


31:24: is one I did uh very recently so this is is probably one of the best
34:19: Some are small, some are big.


31:30: SCS um you can see if I if I play this
34:22: And they can also change color based on which direction you're looking at them from.
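
To make the structure described above more concrete, here is a rough C# sketch of what a single Gaussian splat record could look like. The exact fields and layout here are an assumption for illustration, not any particular tool's file format.

<syntaxhighlight lang="csharp">
// Illustrative only: one possible in-memory layout for a single Gaussian splat,
// mirroring the description above (a sized, oriented, fuzzy blob with
// view-dependent color). Field choices are assumptions, not a specific format.
using System.Numerics;

public struct GaussianSplat
{
    public Vector3 Position;         // where the blob sits in space
    public Vector3 Scale;            // per-axis size: long/stretched vs. round
    public Quaternion Rotation;      // orientation of the stretched blob
    public float Opacity;            // how fuzzy/transparent the edges are
    public Vector3[] ShCoefficients; // spherical-harmonics color coefficients,
                                     // sampled by view direction at render time
}
</syntaxhighlight>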


31:35: I'm like I almost asked you like this this loaded for you but like it don't need to be loaded for me but if you look
34:28: So I've been essentially looking for a way,


31:41: at this this is a scan of a fuit head of like a friend who's here like in chck Republic um his name is amju um he let
34:30: can we integrate the Gaussian Splatting rendering into Resonite?


31:49: me scan like his first suit head and I first reconstructed with the traditional technique but then I started playing
34:35: I'm fairly confident I'll be able to do it at this point.


31:54: with gting split software I threw the same data set at it and like the result is incredible like if you look at the
34:39: I understand it well enough to make an implementation.


32:01: details of the fur like it the technique is capable you know of capturing the you
34:43: The only problem is I don't really have time to actually commit to it right now


32:07: know the softness of it um you know like it it just
34:46: because I've been focusing on finishing the particle system.


32:13: like it just looks real like that's that's that's that's the easiest way to describe it is like it gives you
34:49: But the thing I wanted to do is, after I'm done with the particle system,


32:20: incredible amount of detail while still being able to Rend this at like you know
34:55: mix in a smaller project that's more personal and fun,


32:25: interactive frame rates and I really like like I've been like 3D scanning for
35:00: just like a mental health break, pretty much.


32:30: like years like you know I love like 3D scanning stuff and making like models of things and this technique it offers a
35:03: It's something to do this primarily for myself,


32:37: way to reconstruct you know things um I can actually hold on uh let me also
35:09: because I want to bring those scans in and showcase them to people.


32:42: bring one more just um I can show you
35:15: I'm still not 100% decided, I'll see how things go,


32:48: how uh how the result of this looks with traditional photogrametry so if I bring this you see this is this
35:18: but I'm itching to do this and doing a little bit of research


32:57: is traditional mesh and it's still it's a pretty good result like I was like really happy with this but like if you
35:23: on the weekends and so on to get this going.


33:03: look you know there's like no softness in the hair it you know there's like there's like artifacts Around the Fur it
35:28: It's something I like to do.


33:09: gets kind of it gets kind of like you know blob it's like it loses its softness uh that the cassin splits are
35:33: Also something that a lot of people would appreciate as well,


33:16: able to preserve this is another kind of example um I took these pH photos for this like
35:35: because I know there's other people in the community


33:23: in 2016 like it's like I think for like 8 years ago now um and it's just like
35:38: who were playing with Splats and they wanted to bring them in.


33:30: also like if you just look at the part it it just looks real like I'm I'm
35:41: I think it would also make Resonite interesting to a lot of people


33:35: really like impressed like with technique I've been like kind of having a lot of fun with it um and I've been kind of like you know
35:46: who might not even think about it now,


33:42: like on my off time I've been um I've been kind of like um I've been like
35:47: because it's essentially going to give you a benefit


33:50: looking for ways um I kind of like looking to how it like
35:50: to visualize the Gaussian Splats in a collaborative sandbox environment.


33:55: works and if I to C SPL like War It's relatively simple in principle like there it's it's like an extension of
35:55: It might even open up some new doors.


34:02: Point Cloud where instead of like you know just tiny Points each each of the points can be like you know a colorful
35:58: I'm not 100% decided, but pretty much this is what I've been thinking about.


34:08: blob it's like a has like fuzzy edges to it and they can have different sizes some of them you know you can actually
36:08: Next, Noel64 is asking,


34:14: see some of the individual spls you know some of them are like long and stretched some of them can be round you know some
36:09: Are there plans to add Instant Cut options for cameras?


34:19: are small some are big uh and they can also change color based on which direction you're looking at them from um
36:13: The current flight from one place to another when seeking looks a bit weird over long distances.


34:28: so I've been essentially looking for a way you know can we integrate gasing spotting rendering into resonite and I'm
36:17: You can already do this, so there's an option.


34:35: fairly confident like I'll I'll be able to do it at this point like I understand it like well enough to make an
36:21: I just have it at default, which does have the fly,


34:42: implementation uh the only problem is you know I don't really have time to actually commit to it right now because I've been focusing on finishing the
36:29: but there's literally a checkbox in my UI,


34:48: particle system but um the thing I wanted to do is you know after I'm done
36:32: Interpolate between Anchors.


34:54: with a particle system mix in like you know a smaller project that's more personal and fun just sort of like you
36:33: If I uncheck that and I click on another,


35:00: know like a mental health break pretty much it's something you know just kind of do
36:36: like, you know, this is instant,


35:06: this primarily for myself um because I want to like you know bring those skins
36:37: I will click over there if I can re-


35:11: in and you know showcase them to people um I'm still not 100% decided I'll kind
36:39: No, there's a collider in the way.


35:17: of see how things go but I mean kind of like you know itching to do this and doing aot the B of research you know
36:43: I'm just going to do this.


35:22: like in in like you know like on the weekends and so on to get this going so
36:44: I click on it, you know, I'm instantly here.


35:28: it's something um something I like to do I think also something that like a lot of
36:47: So that feature already exists.


35:34: people would appreciate as well because I know there's other people in the community uh who were you know playing
36:49: If it kind of helps, I can just, you know, keep this one on


35:39: with Splats and they wanted to bring them in and I think would also make like you know resonate kind of interesting to
36:52: so it doesn't do the weird fly-through.


35:46: a lot of people who might not even think about it now because um it's essentially going to give you ability to you know
36:57: But yes, I hope that answers the question.


35:51: visualize the gas in splits in a collaborative like you know stbu environment so it might even open up
37:02: Next, Wicker Dice.


35:56: like some new doors but I'm not 100% decided but pretty much this is kind of
37:04: What would you like Resonite to be in five years?


36:02: what I've been kind of thinking about um next uh n 64 is asking H are
37:06: Is there a specific goal or vision?


36:10: there plans to add instant cut options for cameras do current fly from one place to another seeking looks a bit weird video over longing distances uh
37:08: So for me, like, the general idea of Resonite is


36:17: you can already do this so there's an option uh I just have it a default which
37:16: it's kind of hard to put it in words sometimes


36:23: does have the um which does you know have the fly
37:18: because in a way that would be a good way to communicate


36:29: but there's literally a check boox in my UI inter ployed between anchors if I uncheck that and I click on another like
37:22: but it's almost like, it's like a layer.


36:36: you know this is instant and I will click over there if I can read no there's a c way
37:29: It's like a layer where you have certain guarantees.


36:42: um just going to do this uh I click on it you know I'm instantly here so that
37:33: You're guaranteed that everything is real-time synced.


36:47: feature already exist uh if it um if it kind of helps I can just you know keep this one on so it doesn't do the weird
37:36: Everything is real-time collaborative.


36:54: fly through but yes I hope I would answers the
37:38: Everything is real-time editable.


37:00: question uh next uh wicker di uh what would you like to be 5 years is there
37:43: You have integrations with different hardware.


37:06: specific goal or Vision so for me like um the general idea of resonite
37:45: You have persistence.


37:15: is it's kind of hard to like put in words sometimes because um in a way that like would be a good way to communicate
37:47: You can save anything, whether it's locally or through cloud,


37:22: but it's almost like it's like a layer um
37:51: but everything can be persisted.


37:29: it's like a layer where you have certain guarantees you guarantee that everything
37:55: And what I really want Resonite to be is


37:35: is Real Time synced everything is a real time colaborative everything is a real time you know
37:58: be this layer for lots of different workflows


37:41: editable um you you have Integrations with different Hardware you have you
38:03: and for lots of different applications.


37:46: know persistence you can save anything you know whether it's like locally or from like you know Cloud but everything
38:06: The clearest one is social VR, where you hang out with friends,


37:51: can be persisted um and what I really want
38:10: you're watching videos together,


37:57: resonate to be is like you know be this layer for lots of different workflows
38:13: you're playing games together,


38:03: and for lots of different applications and the most clear one is you know social VR where just you know you're
38:16: you're just chatting or doing whatever you want to do.


38:09: hanging out with friends you're watching videos together um you know you're playing games together you're just you
38:21: But if you think about it, all of it is possible


38:16: know chatting or like you know um doing whatever you want to do but if you think
38:23: thanks to this baseline layer.


38:21: about it like you know all of that is possible thanks to this you know Baseline layer but there's also other
38:26: But there's also other things you can do


38:27: things can do which also benefit from the social effect and it kind of ties into what what I've been talking about
38:27: which also benefit from that social effect.


38:33: earlier uh which has to do vide using night you know as a work tool as part of your pipeline because um if you want to
38:30: And it kind of ties into what I've been talking about earlier,


38:41: be you know working on music if you want to be more you know making art if you want to be like you know doing some
38:34: which has to do with using Resonite as a work tool,


38:46: designing and planning you still benefit you know from all these aspects of the software uh you
38:37: as part of your pipeline.


38:53: know being able to collaborate in real time being able to like you know like if if I'm working on something and showing
38:39: Because if you want to be working on music,


38:58: something you immediately see you know the results of it and you can like you know mess with it like you know modify
38:43: if you want to be making art,


39:04: it and can build your own applications on it like people given there's a nice nature you
38:45: if you want to be doing some designing and planning,


39:10: can build your own tools and then you know share your toools like with other people as well if you want to so for me
38:49: you still benefit from all these aspects of the software,


39:18: what I would really want reson it to be is sort of like a foundation for lots of different applications that goes just
38:53: being able to collaborate in real time.


39:24: you know beyond just social VR but which enriches pretty much what whatever task
38:56: If I'm working on something and showing something,


39:32: you want to imagine um you know with with that social VR with the realtime
38:59: you immediately see the results of it.


39:38: collaboration and persistence and networking you know kind of aspect think of it as you know something like Unity
39:03: You can modify it and can build your own applications on it.


39:45: or your own andreal because those engines or c i shouldn't forget that one
39:07: People, given Resonite's nature, can build their own tools.


39:50: um these engines they're they're been maybe primarily know designed for
39:12: And then share those tools with other people as well, if you want to.


39:56: building games but people do lots of different stuff with them you know they build like you know scientific
39:17: So for me, what I really want Resonite to be


40:01: visualization applications you know medical training applications like you you some people build actual just you
39:20: is a foundation for lots of different applications


40:07: know utilities with them um and they're very general tools which solve some
39:23: that goes beyond just social VR,


40:15: problems for you so you don't have to worry about you know lowlevel Graphics programming in a lot of cases you don't have to worry about you know having
39:27: but which enriches


40:22: basic kind of functional engine uh you kind of you know you kind of get those like you know for free in
39:29: pretty much whatever task you want to imagine


40:28: quotes you know like in in a sense you don't need to spend time on them that's already provided for you and you can
39:34: with that social VR,


40:33: focus more on what your actual application is whether it's a game whether it's a tool whether it's you
39:37: with the real-time collaboration,


40:39: know research application um whatever you want to build I want to do the same
39:39: and persistence, and networking, and that kind of aspect.


40:44: but go a level further where instead of like you know just providing the engine you get all the things I mentioned
39:42: Think of it as something like Unity,


40:51: earlier you get real time collaboration whatever you build it supports real time collaboration it supports persistence
39:45: or Unreal, because those engines,


40:58: you can save it it's you already have Integrations you know with lots of different Hardware you know like
39:47: or Godot, I shouldn't forget that one,


41:04: interactions like grabbing things you know like that's that's just given you don't have to worry about that you can
39:51: these engines,


41:09: build your applications around that um so I wonder as it to be sort of like you
39:54: they're maybe primarily designed for building games,


41:15: know um like almost the next level you know beyond game
39:56: but people do lots of different stuff with them.


41:24: engines um and another kind of analogy I use for this is is if you look at early
39:59: They build scientific visualization applications,


41:29: Computing um comp like you know when when we when computers were like you know big and room scaled the way they
40:03: medical training applications,


41:36: had to be programmed is you know with like you know punch card for example I don't know if that was like the very first method but it's one of the
40:05: some people build actual just utilities with them.


41:42: earliest and it's very difficult because you have to like you know you have to write your program you know and you have
40:11: They're very general tools,


41:47: to translate it in individual like you know numbers on the punch card and then you know later on there came assembly
40:13: which solve some problems for you,


41:54: programming languages and that those made it easier they let you do that let
40:16: so you don't have to worry about


41:59: you do more in less time but it was still like you have to you know think about managing your memory managing your
40:17: low-level graphics programming in a lot of cases,


42:05: stack um you need to decompose you know complex like task into these like you
40:19: you don't have to worry about having


42:10: know primitive instructions and it still takes a lot of mental effort and then later on came you know a higher level
40:21: a basic kind of functional engine.


42:17: programming languages you know like um I'm kind of skipping all but like you know say C C++ you know Java C Anda
40:24: You kind of get those for free, in quotes.


42:25: languages like Python and they like added further abstractions where you know for example it's uh you know modern
40:29: In a sense, you don't need to spend time on them,


42:32: C and C++ you don't have to like worry about memory management as much at least not you know managing your stack and now
40:31: that's already provided for you.


42:38: like you know now now some of the things you have to worry about they're automatically managed like you you just you don't even
40:33: And you can focus more on what your actual application is,


42:45: have to think about them you can just focus my function accepts these values you know outputs this value and it like
40:36: whether it's a game, whether it's a tool,


42:51: you know generates the appropriate you know stack management code and for you um
40:38: whether it's a research application,


42:57: and then you know came like tools built with those languages you know like like like I mentioned Unity or under where you don't have to worry about Oro where
40:41: whatever you want to build.


43:05: you don't have to worry about you know having the game engine like you know being able to render stuff on screen
40:43: I want Resonite to do the same, but go a level further,


43:10: that's already provided with you and with resonite the goal is to essentially move
40:46: where instead of just providing the engine,


43:17: even further along you know this kind of progression to make it very you don't have to worry about the networking
40:49: you get all the things I mentioned earlier.


43:23: aspect the persistence aspect no Integrations with Hardware you're just going like you know given that and you
40:51: You get real-time collaboration.


43:28: can focus more of your time on what you actually want to build in that kind of environment so that's pretty much you
40:53: Whatever you build, it supports real-time collaboration.


43:36: know that's the that's the big kind of like Vision I have like on my end for what I want the resite to
40:57: It supports persistence, you can save it.


43:42: be uh epic Eon is asking uh what are your thoughts on putting arrows on
41:00: You already have integrations with lots of different hardware,


43:49: generic uh type wires um I'm not actually sure if I
41:03: interactions like grabbing things, that's just given.


43:55: fully understand that one um I don't know what mean like generic type wires
41:07: You don't have to worry about that,


44:01: like do mean like uh wires that are of the type type I probably need like a
41:09: you can build your applications around that.


44:07: clarification for this one um yeah s i can I don't know like how
41:13: I want Resonite to be almost the next level,


44:14: how to interpret the particle question so I'll uh oh he's asking arrows and wires I'm some them kind of have
41:21: beyond game engines.


44:23: arrows I mean I think so like a the impulse ones actually have
41:25: Another kind of analogy I use for this is,


44:28: arrows I'm not really sure like I probably need to like see an image or something uh next Zid just said is
41:28: if you look at early computing,


44:36: asking select boxes of code that take inputs and give outputs allowing for coding interface with flug without
41:32: when computers were big and room-scaled,


44:41: having to build some parts of the function using the nodes yes yeah pretty much like you you can your protl node
41:36: the way they had to be programmed is with punch cards, for example.


44:49: becomes a function that you can then then other systems can call without even
41:39: I don't know if that was the very first method,


44:54: like without even needing to know it's prot profile like they're just like you know they're just like I'm going to give you these values and I expect this value
41:41: but it's one of the earliest.


45:01: you know as the output and if you're not you know much as the pattern then you can use it you know give it to those
41:43: And it's very difficult because you have to write your program,


45:07: other systems uh next question Tor that's that
41:47: and you have to translate it in the individual numbers on the punch card,


45:13: does sound amazing is that something for after sauce custom perlex noes so um
41:51: and then later on there came assembly programming languages.


45:19: it's not related to Sauce that's fully for extension side so technically it doesn't matter um what Happ before or
41:55: And those made it easier,


45:27: sauce it's like it's not dependent on it in any way um there is part it is which is
41:57: they let you do more in less time,


45:34: having custom Shader support which you do want to do with perlex that one does
42:01: but it was still like, you have to think about managing your memory,


45:39: require you know switch to Source because with unity the options to do custom shaders are very limited um and
42:04: managing your stack.


45:48: very kind of hacky uh so that one will probably wait but for for the one the parts I was
42:06: You need to decompose complex tasks into these primitive instructions,


45:54: talking about earlier those will have you know regardless of when SAU comes in
42:11: and it still takes a lot of mental effort.


45:59: like it it might happen after s comes in it might happen before it comes in but it's just purely you know how the timing
42:14: And then later on came higher-level programming languages.


46:06: ends up working out and how the prioritization ends up working out uh next question Shadow X I'm just checking
42:18: I'm kind of skipping a lot, but say C, C++, C-sharp,


46:13: time uh Shadow X with Nesta nodes will custom no be able to AO update when search template for script node is
42:24: and languages like Python.


46:19: changed yes they will um there's like multiple way interpreters as well but if
42:26: And they added further abstractions where, for example,


46:24: you have a template and you have it like used in lots of places if you change the
42:29: with modern C and C++,


46:29: internals of the Noe every single instance is going to be reflected so you can actually you know have it like used
42:33: you don't have to worry about memory management as much,


46:35: in lots of objects in the scene and um and you know and like you need to change
42:35: at least not managing your stack.


46:41: something about its internals everything is going to be reflected in the scene the other interpretation is you
42:38: And now some of the things you have to worry about,


46:47: know if you make a library of notes uh and say you know like you referen that
42:42: they're automatically managed.


46:53: like you know in your world and the author of the library publishes an update version is that going to also update you know other worlds like you
42:44: You don't even have to think about them.


47:01: know which which do use that Library uh that one that would be handled by the
42:46: You can just focus on: my function accepts these values,


47:07: molecule system which is you know our plan system for sort of like versioning
42:49: outputs this value, and it generates the appropriate stack management code for you.


47:12: and we want to use it not just for resonate itself but also for prot flag
42:57: And then came tools built with those languages,


47:17: so you can you know publish you know your library functions and so on and with that what we will do is let you
43:00: like I mentioned, Unity or Unreal,


47:23: define you know rules on when to a to update and what not um we probably follow something you know
43:02: where you don't have to worry about, or Godot,


47:29: like seating versioning so like if it's like you know minor update it auto updates unless you for like you know
43:05: where you don't have to worry about having the game engine,


47:34: unless to disable that as well if it's a major update it's not going to aut to update unless you specifically ask it to
43:09: being able to render stuff on screen.


47:41: um so that's going to be like you know just the other part of it that one's definitely going to like um this one
43:10: That's already provided for you.


47:48: going to give you more of a choice uh next question Troy Borg so
43:13: And with Resonite,


47:53: could we have something like bubble note that has all the code for them floating around randomly uh the random Lifetime on it um
43:15: the goal is to essentially move even further along this kind of progression


48:01: I'm not F sure what you mean bubble note um but yeah pretty much you can package you know all the code you know for
43:19: to make it where you don't have to worry about the networking aspect,


48:06: whatever the bubble whatever you want the bubble to do is in that Noe and it doesn't need to be like if you me for
43:23: the persistence aspect, integrations with hardware,


48:12: example my bubbles like oh that didn't
43:26: you're just given that,


48:19: work there we go like this one you see like I have this bubble and this bubble
43:28: and you can focus more of your time


48:25: it has like you know code on it that handles you know it handles it flying around uh and when I right now when I
43:30: on what you actually want to build in that kind of environment.


48:33: make the bubble it actually duplicates all the code with a bubble which means the protox VM needs to compile all this
43:34: So that's pretty much,


48:40: code it's relatively simple so is not as much but still it adds up especially you have like you know if you had like
43:36: that's the big vision I have on my end


48:45: hundreds of these or thousands um with a nested nose all this
43:39: for what I want Resonite to be.


48:51: bubble will need to do is you know just reference that template and and only need one of it which means it's just
43:43: I think Easton is asking,


48:57: going to reuse the same you know compiled like instance of the particle node instead of duplicating literally
43:46: what are your thoughts on putting arrows on generic type wires?


49:03: the entire thing on each of the each of the you know objects you make
43:54: I'm not actually sure if I fully understand that one.


49:09: independently so next uh question from Tober
43:58: I don't know what you mean, generic type wires.


49:15: like uh like for processing textures you could do stuff like level curves adjustments like in blender classroom
44:01: Do you mean wires that are of the type type?


49:22: the texture for albo they just uh they adjusted with levels and gray scale and plug into hide map instead of separate
44:06: I probably need a clarification for this one.


49:28: texture yeah like you you could um I mean I can fly understand like how to
44:11: Sorry, I can't...


49:35: map this question toite but uh because I'm not familiar with blender classroom but you could you know Define
44:13: I don't know how to interpret the particular question, so I'll...


49:43: your own procedural texture and then you know use it under stuff that procedural texture it it will end up as actual bit
44:17: Oh, he's asking arrows and wires.


49:49: mop it's going to you know it's going to run your code to generate the texture data upload it to the GPU and at a point
44:24: I think the impulse ones actually have arrows.


49:55: it's just the normal texture uh but they able to do stuff
44:30: I'm not really sure, I probably need to see an image or something.


50:01: like that or at least like you know similar uh next question Dusty sprinkles I'm also just checking how many it is
44:35: Next, zitjustzit is asking,


50:07: fair bit of questions uh uh Dusty sprinkles is asking when we get custom
44:37: select boxes of code that take inputs and give outputs,


50:14: nodes do you think we'll be able to create our own default node UI uh I could see using audio DSP for custom
44:39: allowing for coding interface with flux without having to build some parts of the function using the nodes.


50:19: nodes to make do um so the custom UI for the protx NOS
44:44: Yes.


50:26: that's completely independent from Custom nodes um like it's it's something
44:47: Your protoflux node becomes a function that other systems can call


50:32: like you know we could also offer um but it's pretty much like a completely separate feature um because the the
44:53: without even needing to know it's ProtoFlux.


50:38: generation of the node is you know it's technically outside with the main Port Flex itself it's sort of like the UI to
44:57: They're just like, I'm going to give you these values and I expect this value as the output,


50:44: work like interface with a prolex uh we could add mechanisms you know to be able
45:02: and if your node matches that pattern, then you can give it to those other systems.
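
As a purely hypothetical C# sketch of that idea (a nested node exposed behind a plain "these inputs in, this value out" shape, so a caller never needs to know it is ProtoFlux): the interface and names below are invented for illustration and are not the actual ProtoFlux API.

<syntaxhighlight lang="csharp">
// Hypothetical sketch: a nested node exposed behind a plain function-style
// interface, so callers only see "give me these values, get this value back".
public interface IFloatBinaryFunction
{
    float Invoke(float a, float b);
}

// Some other system can accept anything matching that pattern...
public static class SomeOtherSystem
{
    public static float ApplyTwice(IFloatBinaryFunction f, float a, float b)
        => f.Invoke(f.Invoke(a, b), b);
}

// ...without knowing whether the implementation is hand-written C#
// or a compiled nested ProtoFlux node wrapped to fit the same shape.
public sealed class HandWrittenAdd : IFloatBinaryFunction
{
    public float Invoke(float a, float b) => a + b;
}
</syntaxhighlight>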


50:49: to do custom noes uh there are some parts of that like I'm a little bit careful with because usually you have
45:10: Next question, Troy Borg, that does sound amazing,


50:56: you can have you know hundreds or thousands of the nodes and having that like customizable or having customizable
45:14: is that something for after Sauce, custom ProtoFlux nodes?


51:02: system that can end up like you know a performance concern because the customization it can depending on how
45:18: So, it's not related to Sauce, that's fully Froox Engine side,


51:08: it's done it can add a certain amount of overhead uh but it's not something you
45:22: so technically it doesn't matter whether it happens before or after Sauce,


51:14: know we completely close to it's just like we're probably going to approach it more carefully um it's it's not going to come
45:28: it's not dependent on it in any way.


51:20: as part of custom notes though like those are independent
45:32: There is a part that is, which is having custom shader support,


51:27: uh blue oh s unfortunately late but I don't know if I can bring on it easier
45:35: which you do want to do with ProtoFlux; that one does require the switch to Sauce,


51:34: because I already have the setup uh without I'm sorry
45:41: because with Unity, the options to do custom shaders are very limited,


51:40: um uh J widen is asking internally how prepared is the engine to take full
45:47: and very kind of hacky.


51:45: advantage of uh modern net past the jit uh there's been lots of things since framework like spans to avoid allock and
45:50: So, that one will probably wait, but for the parts I was talking about earlier,


51:51: unsafe methods think bit casting that can make things way way way faster are areas where we use the new features in
45:56: those will happen regardless of when Sauce comes in.


51:58: the Headless client through pre-processor directives or something so um we use like a number of features uh
45:59: It might happen after Sauce comes in, it might happen before it comes in,


52:05: that are sort of backported to all the versions like you know like you mentioned SPS um using you know stle locations and
46:03: but this is just purely how the timing ends up working out,


52:12: so on and we expect those you know to get like a performance uplift uh with more than run time because those are
46:07: and how the prioritization ends up working out.


52:18: specifically optimized for it um so there's like parts of the engine like especially anything
46:10: Next question, ShadowX, I'm just checking time.


52:24: newer like uh we've been trying to like you know use the more modern mechanisms
46:14: ShadowX, with nested nodes, will custom nodes be able to auto-update


52:29: were possible uh there are bits where we cannot really use the mechanisms or if
46:17: when the source template for the script node is changed?


52:35: we did it would actually be detrimental right now um there's like certain things
46:20: Yes, there are multiple ways to interpret this as well,


52:40: like uh like for example uh vector's Library um that one with with modern net
46:24: but if you have a template, and you have it used in lots of places,


52:47: it runs way faster but if we use it right now we would actually run way slower because uh with the with uh with
46:28: if you change the internals of the node, every single instance is going to be reflected.


52:56: unit is mono it it just it doesn't run well like there's there's certain things which um
46:33: So you can actually have it used in lots of objects in the scene,


53:03: if we did right now like we essentially end up hurting performance until we make the switch so we tend to like use a
46:40: and you need to change something about its internals,


53:08: different approach that's not as optimal for the more modern net but it's more optimal now um so there might be like
46:42: everything is going to be reflected in the scene.


53:15: you know some places like like in COD you can see like that but where possible we try to use the modern mechanisms
46:45: The other interpretation is if you make a library of nodes,


53:21: there's also some things which we cannot use just because they don't exist you know with the version of net and there's
46:51: and say you reference that in your world,


53:26: like know way to backp like for example using you know SMI the intrinsics you know to accelerate lot of the math um
46:54: and the author of the library publishes an updated version,


53:34: that's that's just not supported on all the versions and there's no way you know to like backport it so we cannot really
46:57: is that going to auto-update other worlds, which do use that library?


53:39: use those mechanisms um so once we do make the switch uh what we expect like pretty
47:04: That would be handled by the Molecule system,


53:47: substantial performance Improvement uh but part of like why we want to do the switch especially as one of the first
47:08: which is our planned system for versioning,


53:54: task you know towards like uh performance is because it'll let us use all the
47:12: and we want to use it not just for Resonite itself,


53:59: mechanisms you know going forward so when we build new systems we can you
47:15: but also for ProtoFlux, so you can publish your library functions and so on.


54:05: know take full advantage of you know all the mechanisms that modern net offers
47:20: And with that, what we do is let you define rules on when to auto-update and what not.


54:10: and we can over time also like you know upgrade the old ones to get even more performance out of it um so like overall
47:27: We probably follow something like semantic versioning,


54:16: I kind of expect you know big performance uh even for the parts that are not like most prepared for it just
47:30: so if it's a minor update, it auto-updates unless you disable that as well.


54:21: because the code generation quality is way higher uh um but
47:36: If it's a major update, it's not going to auto-update unless you specifically ask it to.


54:29: um it's um like like we can essentially like you
47:42: So that's going to be the other part of it.


54:36: know like take them time like you know to like get even more performance by switching some more approaches the more
47:44: That one's definitely going to give you more of a choice.
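
A minimal C# sketch of that kind of rule, assuming semantic versioning as described. This only illustrates "minor updates apply automatically, major updates need an explicit opt-in"; it is not the planned Molecule system itself.

<syntaxhighlight lang="csharp">
using System;

// Illustration only: decide whether a library reference should auto-update,
// following the semver-style rule described above.
public static class UpdatePolicy
{
    public static bool ShouldAutoUpdate(Version installed, Version published,
                                        bool autoUpdateMinor = true,
                                        bool allowMajor = false)
    {
        if (published <= installed) return false;                  // nothing newer
        if (published.Major != installed.Major) return allowMajor; // major: opt-in only
        return autoUpdateMinor;                                     // minor: on by default
    }
}

// e.g. ShouldAutoUpdate(new Version(1, 4), new Version(1, 5)) == true
//      ShouldAutoUpdate(new Version(1, 4), new Version(2, 0)) == false
</syntaxhighlight>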


54:43: like more performant ones so by making the switch then on itself it's going to
47:51: Next question, Troy Borg.


54:48: you know B big performance boost but that's not the end of it it also you know opens doors for doing even more
47:53: So could we have something like a bubble node that has all the code for them floating around randomly,


54:54: like following that uh next question Z just just Z can those
47:58: the random lifetime on it?


55:00: splots work within the engine they are in meeses right could they be rigged uh so Splats you can have to implement the
48:01: I'm not really sure what you mean, BubbleNode,


55:06: support for it yourself uh the output is in a mesh like the idea of gasan
48:03: but pretty much you can package all the code for whatever you want the bubble to do in that node,


55:12: splatting is you're essentially using a different type of primitive you know to represent your you
48:10: and it doesn't need to be, for example, my bubbles.


55:19: know I was I don't want to say mesh because you know it's it's not a mesh um
48:16: Oh, that didn't work.


55:24: entes your mod you know your scene or whatever you know you're want to show um
48:20: There we go, like this one.


55:30: so instead of a triangle you know vertices and triangles that we have with traditional geometry it's completely
48:21: You see, I have this bubble, and this bubble, it has code on it that handles it flying around,


55:35: composed from Gans which are a different type of primitive um you can actually
48:32: and right now, when I make the bubble, it actually duplicates all the code for the bubble,


55:42: use meshes to render those um from what I looked there's like multiple ways you
48:36: which means the ProtoFlux VM needs to compile all this code.


55:47: to do the rendering um one of the um like one of the um approaches for
48:40: It's relatively simple, so it's not as much, but still, it adds up,


55:57: it is like you know you essentially encode like the Splat data and then you just render this you use the typical
48:44: especially if you had hundreds of these, or thousands.


56:03: like you know GP rization pipeline uh to render them as quads you know and then the Shader does like the sampling of
48:49: With the nested nodes, all this bubble will need to do is reference that template,


56:10: like spherical harmonics for like uh you know so can change color based on the like angle you look at it from and other
48:54: and only need one of it, which means it's just going to reuse the same compiled instance of the particular node


56:16: stuff there's other approaches that um implement the actual rization in compute
49:01: instead of duplicating literally the entire thing on each of the objects you make independently.
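
The compile-once behavior described here is, at its core, a template cache. The C# sketch below shows that general idea under that assumption; it is not how the ProtoFlux VM actually stores compiled nodes.

<syntaxhighlight lang="csharp">
using System;
using System.Collections.Generic;

// Illustration: many objects (bubbles) referencing one template share a single
// compiled instance instead of each carrying and compiling its own copy.
public sealed class CompiledNodeCache
{
    private readonly Dictionary<Guid, object> _compiled = new();

    public object GetOrCompile(Guid templateId, Func<object> compile)
    {
        if (!_compiled.TryGetValue(templateId, out var node))
        {
            node = compile();             // expensive: done once per template
            _compiled[templateId] = node;
        }
        return node;                      // cheap: reused by every referencing object
    }
}
</syntaxhighlight>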


56:22: Shader uh and that can like lead to like more efficient ones and at that point like you know you're not using traditional geometry uh but you know the
49:11: Next, a question from Troy Borg.


56:29: approach is kind of vary there's like lot of way still like Implement them next question troyborg uh so is
49:16: For processing textures, you could do stuff like level curves adjustments,


56:36: that something you need special software to create him or is it just something in reso so it knows how to render that new
49:21: like in Blender Classroom, the texture for Albedo, they adjust it with levels,


56:41: format for 3 scan um we essentially need like a code to like you know render them
49:25: and grayscale then plug into a heightmap instead of separate texture.


56:46: the main like the gist of it is um the way you approach it is you get your your
49:32: I cannot fully understand how to map this question to Resonite,


56:52: data set you know your gas and splat it's essentially a bunch of like is essentially points with lots of external
49:36: because I'm not familiar with Blender Classroom,


56:58: data you have like you know the size of it uh and then the color is en encoded using something called spherical
49:40: but you could define your own procedural texture and then use it in other stuff.


57:05: harmonics and that's essentially like a mathematical way to like efficiently encode
49:46: The procedural texture, it will end up as actual bitmap.


57:11: information uh on the surface of a sphere which means like you can kind of sample it based on the direction like
49:49: It's going to run your code to generate the texture data,


57:17: where if if you consider you know um if you consider a sphere I should have
49:53: upload it to the GPU, and at that point it's just the normal texture.


57:22: grabbed my brush I I I'm going to grab a new one because I'm too LA to go over
50:00: But you're able to do stuff like that, or at least look similar.
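
A rough C# sketch of the kind of processing described: derive a grayscale, levels-adjusted heightmap from albedo pixels, ending up with plain bitmap data that can then be uploaded as a normal texture. The array-based representation is an assumption for illustration, not Resonite's procedural texture API.

<syntaxhighlight lang="csharp">
using System;

// Illustration: turn RGB albedo pixels into a grayscale heightmap with a simple
// levels adjustment. The output is just bitmap data, ready to upload to the GPU.
public static class HeightFromAlbedo
{
    public static float[] Generate(float[] albedoRgb, int pixelCount,
                                   float blackPoint = 0.1f, float whitePoint = 0.9f)
    {
        var height = new float[pixelCount];
        for (int i = 0; i < pixelCount; i++)
        {
            float r = albedoRgb[i * 3 + 0];
            float g = albedoRgb[i * 3 + 1];
            float b = albedoRgb[i * 3 + 2];
            float gray = 0.2126f * r + 0.7152f * g + 0.0722f * b;           // luminance
            float levels = (gray - blackPoint) / (whitePoint - blackPoint); // remap
            height[i] = Math.Clamp(levels, 0f, 1f);
        }
        return height;
    }
}
</syntaxhighlight>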


57:28: there um brushes uh let's see I'm not going to
50:03: Next question, Dusty Sprinkles. I'm also just checking how many of these.


57:35: like over there because it's just a simple doodle but um say you have like you know this is sphere in you know 2D
50:07: Fair bit of questions. Dusty Sprinkles is asking,


57:43: it's a circle um and if it's like unit sphere then each point on the sphere is essentially a
50:13: When we get custom nodes, do you think we'll be able to create our own default node UI?


57:49: Direction so if we have information encoded you know on the surface of your sphere then each point like if if I'm
50:18: I could see using AudioDSP for custom nodes to make DOS.


57:56: like you know if I'm the Observer here so this is you know my eye and I'm looking at this then the direction is
50:23: So the custom UI for the ProtoFlux nodes that's completely independent from custom nodes.


58:03: essentially the point from the center of the sphere towards the Observer I use this direction to sample the function
50:31: It's something we could also offer, but it's pretty much a completely separate feature.


58:10: and I get a unique color for this particle direction if I look at it from this direction you know
50:37: Because the generation of the node is technically outside of the main ProtoFlux itself.


58:18: then then I get this direction and I sample the color here and this color can be different than this color and this is
50:43: It's the UI to interface with the ProtoFlux.


58:25: a way that the Gin plus they're really good at encoding stuff like you know Reflections for example because with
50:47: We could add mechanisms to be able to do custom nodes.


58:31: reflection you literally the color on the point like on the point it literally changes based on the angle you look at
50:51: There are some parts of that that I'm a little bit careful with,


58:38: it from um so and it's the C it's the uh sorry
50:55: because usually you can have hundreds or thousands of nodes,


58:46: it's the spherical harmonics that actually take the bulk of the data for gasan splot because uh um from what I've
50:59: and having customizable systems can end up like a performance concern.


58:53: seen they use third order Spa harmonics which means for each point you actually have 16 colors which is quite a lot uh
51:05: Because the customization, depending on how it's done,


59:00: and a lot of the work is like you know how do you compress that in a way that
51:09: it can add a certain amount of overhead.


59:06: the GPU can decode like very fast on the Fly you know so it doesn't eat all your
51:12: But it's not something we're completely closed to,


59:12: vrm uh but yeah essentially you've read your code you know to answer the question more rally you've read your
51:15: it's just like we're probably going to approach it more carefully.


59:17: code you know to like encode it properly and then like you know render it as efficiently as you can um and you can kind of utilize you
51:19: It's not going to come as part of custom nodes, though.


59:25: know some of the ex thing like rization pipeline as well to kind of save you some time um Z zit is asking I don't
51:22: Those are independent.


59:32: have a good understanding of splots but aren't they essentially particles um so I kind of like maneuver
51:27: BlueCyro, oh, Cyro unfortunately is late,


59:37: this a few times like there're not particles in the sense of particle system there's like some overlap because each each split is is a point but it it
51:30: but I don't know if I can easily bring him on,


59:46: has a lot of the additional data to it and it's also not you know a tiny small particle but like it can be like varus
51:34: because I already have this set up without him, I'm sorry.


59:51: sized you know color blob uh next question does the sprinkles so
51:41: Jay Wyden is asking internally how prepared is the engine


59:58: can you make spots with any 3D scans I don't really get them but they're neat uh so the data set I use for mine uh
51:45: to take full advantage of modern .NET past the JIT.


1:00:05: it's essentially just photos it's the same approach I use for traditional um
51:48: There have been lots of things since .NET Framework, like spans,


1:00:12: um uh like um um let me
51:50: to avoid allocs, and unsafe methods, think bitcasting,


1:00:19: think like there's there's like different ways to like make them but the
51:53: that can make things way, way, way faster.


1:00:25: most common one that seen is like you know you L just take lots of photos you use traditional approach you know the
51:56: Are there areas where we use the new features in the headless client


1:00:30: photos get aligned in a space um and like then then you sort of like you know
51:59: through pre-processor directives or something?


1:00:37: estimate like the depth ex like the traditional you know 3D construction except for a Splats it doesn't estimate
52:02: We do use a number of features that are backported to older versions,


1:00:43: the depth the way I've seen it done in the software I use is it starts with like a a sparse Point Cloud that's made
52:08: like you mentioned spans, using stack allocations and so on,


1:00:50: from the tie points from the photos essentially like points in space that are like shared between the photos and
52:13: and we expect those to get a performance uplift


1:00:57: it generates you know Splats from those uh and the way it kind of does it is uh
52:17: with the modern runtime, because those are specifically optimized for it.
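
For readers unfamiliar with the .NET features being discussed, here is a small generic C# sketch of spans, stack allocation, and bit-casting. It only illustrates those APIs; it is not code from FrooxEngine or the headless client.

<syntaxhighlight lang="csharp">
using System;
using System.Runtime.InteropServices;

public static class SpanExamples
{
    // Span<T> lets you work over a slice of memory without allocating a new array.
    public static float Sum(ReadOnlySpan<float> values)
    {
        float total = 0f;
        foreach (var v in values) total += v;
        return total;
    }

    public static float Demo()
    {
        // stackalloc: a small temporary buffer on the stack, no GC allocation.
        Span<float> temp = stackalloc float[4] { 1f, 2f, 3f, 4f };

        // Bit-casting: reinterpret the same bytes as another type without copying.
        Span<byte> raw = MemoryMarshal.AsBytes(temp);
        ReadOnlySpan<float> backAsFloats = MemoryMarshal.Cast<byte, float>(raw);

        return Sum(backAsFloats);
    }
}
</syntaxhighlight>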


1:01:02: it um I believe it uses like gradient descent which is a form of like machine learning where each of the Splats is
52:21: So there's parts of the engine, especially anything newer,


1:01:09: actually taught how how it should look so it matches your input images um so
52:25: like we've been trying to use the more modern mechanisms where possible.


1:01:15: that's usually the longest part of your construction process because it has to like go through a lot of like know training and if you use the software I
52:31: There are bits where we cannot really use the mechanisms,


1:01:22: use it's called jaet post post shot um um it
52:34: or if we did, it would actually be detrimental right now.


1:01:29: um um it essentially like you know it runs like usually like several dozen
52:38: There's certain things like, for example, vectors library.


1:01:36: thousands like training steps and you can usually in the begin you can kind of see like the Splats are very fuzzy and
52:45: That one, with modern .NET, it runs way faster,


1:01:42: it's just kind of moving around and they're sort of like settling into space and getting more detail and it also like
52:48: but if we used it right now, we would actually run way slower


1:01:48: adds more Splats you know in between them where it needs to add more detail so there like a there's like you know whole kind of
52:52: because with Unity's Mono, it just doesn't run well.


1:01:55: training process it I do actually have a video I can show you um because there's also like relevant question I can see
52:59: There's certain things which, if we did right now,


1:02:03: um so I'm going to uh uh because Shadow X is asking does
53:04: would essentially end up hurting performance until we make the switch,


1:02:10: all common uh common spling sofware encode spherical harmonics I never noticed color changes in my scans in a
53:08: but we tend to use a different approach that's not as optimal for the more modern .NET,


1:02:16: scaners and pulse shot so I know for sure pulse shot does it I don't know about skuniverse because I don't use
53:12: but is more optimal now.


1:02:21: that one it's possible they simplify it because like I've seen some implementations of G in Splats they just
53:15: So there may be some places, like in code, you can see that,


1:02:27: throw away the spherical harmonics and C a single color which saves ton of space but you also lose you know one of the
53:18: but where possible, we try to use the modern mechanisms.


1:02:33: big benefits of them um but I can tell you uh posture
53:21: There's also some things which we cannot use just because they don't exist


1:02:40: definitely does it and I have a video that showcases that pretty well um so
53:25: with the version of .NET, and there's no way to backport them.


1:02:47: this is uh this is kind of figment that I did for f and I like reprocessed this
53:27: For example, using SIMD intrinsics to accelerate a lot of the math.


1:02:53: with gasi spting and watch watch like you know the reflections on you know on the surface of the statue
53:34: That's just not supported under all the versions, and there's no way to backport it,


1:03:00: you can see how they change based on where I'm looking from it's kind of subtle uh if I look at the top there
53:38: so we cannot really use those mechanisms.


1:03:08: actually another interesting thing and I have another video for this uh but gassian splatting is very
53:43: Once we do make the switch, we expect a pretty substantial performance improvement,


1:03:14: useful if you have like good coverage from like opposite angles because the way the scanning process works they like
53:49: but part of why we want to do the switch, especially as one of the first tasks towards performance,


1:03:21: like I mentioned earlier they they are trained to reproduce your input IM as
53:56: is because it'll let us use all the mechanisms going forward.


1:03:26: close as possible which means for all the areas all the you know area where you have photo coverage from they look
54:02: If we build new systems, we can take full advantage of all the mechanisms that modern net offers,


1:03:33: generally great but if you like move too far away from them like in this case for example from the top I was not able to
54:09: and we can, over time, also upgrade the old ones to get even more performance out of it.


1:03:40: take any pictures you know from the top of it it actually it actually kind of like they start going a bit funky do you
54:16: Overall, I expect big performance gains, even for the parts that are not the most prepared for it,


1:03:47: see like do you see like how it's kind of like all kind of fuzzy and the color is blotchy um
54:21: just because the code generation quality is way higher,


1:03:55: and it's the that's um that's kind of like for one it shows you know like it
54:32: but we can essentially take the time to get even more performance by switching some of the approaches,


1:04:02: does encode you know the color based on the direction uh but it also shows like one of the downsides of it uh because I
54:42: to the more performant ones.


1:04:08: have another video here um so this is a scan of um this is like um I don't even
54:45: Making the switch on its own is going to be a big performance boost, but that's not the end of it.


1:04:16: know how long ago this was like over six years ago um uh when um uh like my family from
54:50: It also opens doors for doing even more, like following that.


1:04:26: Ukraine they were like visiting over because my grandma she was um Ukrainian uh and they made
55:05: Splats, you kind of have to implement the support for it yourself.


1:04:32: borch uh which is like know kind of tradition like foods and I I wanted to scan it but I Lally didn't have time
55:09: The output isn't a mesh.


1:04:39: because they put on desk I was like I'm going to scan it and I was only able to take three photos before they started
55:11: The idea of Gaussian splatting is you're essentially using a different type of primitive to represent your...


1:04:44: moving things around uh but it actually made for an interesting scan because I was like how much can I get out of three
55:20: I don't want to say mesh, because it's not a mesh.


1:04:51: photos and in the first part this is traditional scan with a mesh surface
55:24: It essentially represents your model, your scene or whatever you're going to show.


1:04:57: that's done with you know with ag of meta shape oh I switch to the other one so you see all the reflections
55:30: Instead of the vertices and triangles that we have in traditional geometry,


1:05:03: they're kind of like you know baked in it doesn't actually look like metal anymore uh there's you know lots of
55:34: it's completely composed from Gaussians, which are a different type of primitive.


1:05:09: parts missing because literally they were not scanned but the software was
55:41: You can actually use meshes to render those.


1:05:15: able to estimate the surface it knows this is a straight surface if I look at it from the angle apart from the missing
55:45: From what I've looked, there's multiple ways to do the rendering.


1:05:22: parts it's still coherent it still holds shape with Gan sping it doesn't
55:51: One of the approaches for it is you essentially encode the splat data,


1:05:29: necessarily reconstruct the actual shape it's just trying to look correct from angles and you'll be able to see that in
56:01: and then you just render this. You use the typical GPU rasterization pipeline to render them as quads,


1:05:35: a moment uh so this is the gasin split and you see it looks correct and the moment I move it it just
56:08: and then the shader does the sampling of spherical harmonics so it can change color based on the angle you look at it from and other stuff.


1:05:42: disintegrates like you see it's just a jumble of you know colorful points and it's because all the views that I had
56:16: There's other approaches that implement the actual rasterization in compute shader,


1:05:49: they're like relatively close to each other and for those views it looks correct because it was strength to look
56:23: and this can lead to more efficient ones, and at that point you're not using traditional geometry,


1:05:55: correct but because there's no cameras you know from the other angles the cians are free to do you know just whatever
56:29: but the approach is kind of varied, there's lots of ways to implement them.
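As a rough illustration of the quad-based approach just described, here is a toy CPU-side Python sketch, not Resonite's renderer or shader code; the splat fields, the pinhole camera, and the low-order spherical-harmonic basis are assumptions made for the example.

<syntaxhighlight lang="python">
# Toy sketch only: evaluate a view-dependent splat color and a screen-space quad on the CPU.
import numpy as np

def eval_sh_degree1(coeffs, direction):
    """Degree 0+1 real spherical harmonics: coeffs is (4, 3), one column per RGB channel."""
    x, y, z = direction
    basis = np.array([0.2820948, -0.4886025 * y, 0.4886025 * z, -0.4886025 * x])
    return np.clip(basis @ coeffs, 0.0, 1.0)

def splat_to_quad(center, scale, camera_pos, focal, resolution):
    """Project the splat center with a simple pinhole model and return rough quad bounds."""
    view = center - camera_pos
    depth = max(float(view[2]), 1e-6)
    px = focal * view[0] / depth + resolution[0] / 2
    py = focal * view[1] / depth + resolution[1] / 2
    r = focal * scale / depth                      # rough screen-space radius of the blob
    return (px - r, py - r, px + r, py + r), depth

camera_pos = np.array([0.0, 0.0, -3.0])
center = np.array([0.0, 0.0, 0.0])
direction = (camera_pos - center) / np.linalg.norm(camera_pos - center)
coeffs = np.zeros((4, 3)); coeffs[0] = [0.8, 0.4, 0.2]        # mostly view-independent orange
color = eval_sh_degree1(coeffs, direction)                     # color depends on view direction
quad, depth = splat_to_quad(center, 0.1, camera_pos, 800, (1280, 720))
# A real renderer would sort the quads by depth and alpha-blend them; a compute-shader
# rasterizer does the same work without going through the usual triangle pipeline.
</syntaxhighlight>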


1:06:02: like they're they're they don't have anything to constrain them you know to look particle way so it just ends up a
56:34: Next question, Troy Borg. Is that something you need special software to create them,


1:06:07: jumble and that's a very kind of to me that's very kind of interesting way to
56:38: or is it just something in Resonite so it knows how to render that new format for a 3D scan?


1:06:13: visualize the differences between the scanning techniques but yeah just kind of along
56:43: We essentially need code to render them. The gist of it, the way you approach it,


1:06:19: with it like answer to yes they do encode the spherical harmonics and you can make it like you
56:51: is you get your data set, your Gaussian splat, it's essentially points with lots of extra data.


1:06:24: know pretty much with n SC but the quality of the scan is going to depend
56:59: You have the size of it, and then the color is encoded using something called spherical harmonics.


1:06:29: you know on your data set and I've been kind of throwing because I have like terabytes of like you know 3D scans that
57:06: That's essentially a mathematical way to efficiently encode information on the surface of a sphere,


1:06:35: be just throwing everything you know at the software and seeing what it like ends up producing I know there's also other ways
57:13: which means you can sample it based on the direction.


1:06:42: there's like you know some software and Alo just double checking time uh there's also you know some software um that um
57:17: If you consider a sphere... I should've grabbed my brush.


1:06:51: just generates it like you know with AI and stuff but like I don't know super much about that so there's like other
57:24: I'm gonna grab a new one, because I'm too lazy to go over there.


1:06:57: ways to do them um but I'm mostly familiar with the one you know with um
57:33: Brushes... let's see... I'm not gonna go over there, because it's just a simple doodle.


1:07:02: I'm mostly familiar with um you know like using photos as the data
57:38: Say you have this sphere in 2D, it's a circle.


1:07:08: set uh next question uh J ven 4 prolex is a VM it compiles things um so
57:45: And if it's a unit sphere, then each point on the sphere is essentially a direction.


1:07:15: technically VM and compiling those are like two separate things also Epicon is asking what is a perlex VN so I'm going
57:50: So if you have information encoded on the surface of your sphere, then each point...


1:07:21: to just combine those questions into one um
57:56: If I'm the observer here, so this is my eye, and I'm looking at this,


1:07:27: so yes um it it is a VM which essentially means it's sort of like you know it has a defined sort of like run
58:02: then the direction is essentially the point from the center of the sphere towards the observer.


1:07:33: time like it's it's a technically stack based VM um it's
58:08: I use this direction to sample the function, and I get a unique color for this particle direction.


1:07:39: a um how do I put it it's essentially sort like in an environment where the code of the nodes you know it knows it
58:13: If I look at it from this direction, then I get this direction, and I sample the color here,


1:07:46: knows how to work with the particle environment um that sort of isolates it
58:22: and this color can be different than this color.


1:07:51: you know from everything else it's sort of like you know like a runtime that's able to run the code it sort of compiles
58:24: And this is why Gaussian splats are really good at encoding stuff like reflections, for example,


1:07:57: thing it's sort of like a halfway step it doesn't it doesn't directly produce
58:31: because with reflection, the color on the point, it literally changes based on the angle you look at it from.
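In symbols (a hedged restatement of the idea above in standard spherical-harmonics notation, not anything Resonite-specific): with splat center <math>\mathbf{p}</math> and observer position <math>\mathbf{o}</math>, the sampling direction is <math>\mathbf{d} = (\mathbf{o} - \mathbf{p}) / \lVert \mathbf{o} - \mathbf{p} \rVert</math>, and the view-dependent color is

<math>c(\mathbf{d}) = \sum_{l=0}^{L} \sum_{m=-l}^{l} c_{lm}\, Y_{lm}(\mathbf{d}),</math>

with one set of coefficients <math>c_{lm}</math> per color channel; for third-order harmonics, <math>L = 3</math>, that is <math>(3+1)^2 = 16</math> coefficients per channel.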


1:08:03: machine code from actual code the actual code you know of the individual nodes that ends up being
58:43: And it's the spherical harmonics that actually take the bulk of the data for a Gaussian splat,


1:08:09: machine node for the node Itself by the way it can operat with the VM um what
58:49: because from what I've seen, they use third-order spherical harmonics,


1:08:15: prolex does it builds uh something called execution list and evaluation lists so if you have like you know a
58:55: which means for each point you actually have 16 colors, which is quite a lot.


1:08:21: sequence of notes or sequence of impulses it's going to look at it and be like okay this executes then this
59:00: And a lot of the work is how do you compress that in a way that the GPU can decode very fast on the fly,


1:08:26: executes then this executes and builds a list and then during execution it already has the pre-built list as well
59:09: and doesn't eat all your VRAM.
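A back-of-the-envelope sketch of why the harmonics dominate the per-splat data and why compression matters; the field layout assumed here is illustrative, not Resonite's or any particular tool's format.

<syntaxhighlight lang="python">
# Rough, assumed per-splat layout: position, scale, rotation, opacity, third-order SH color.
SH_DEGREE = 3
sh_coeffs_per_channel = (SH_DEGREE + 1) ** 2            # 16 coefficients per color channel
sh_floats = sh_coeffs_per_channel * 3                    # RGB -> 48 floats, the bulk of the data
other_floats = 3 + 3 + 4 + 1                             # position, scale, rotation quat, opacity
bytes_per_splat = (sh_floats + other_floats) * 4         # 32-bit floats, uncompressed

print(bytes_per_splat)                                   # 236 bytes per splat
print(bytes_per_splat * 5_000_000 / 2**30, "GiB")        # roughly 1.1 GiB for a 5M-splat scene
</syntaxhighlight>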


1:08:33: as like it resolves things like you know stack allocation it's like okay this node needs to use this variable and this
59:13: But essentially, you write your code, to answer the question more directly,


1:08:38: node needs to use this variable I'm going to allocate this space on the stack and you know and I'm going to give
59:17: you write your code to encode it properly, and then render it as efficiently as you can.


1:08:43: these notes the uh the corresponding offsets so they can you know read the proper values to and from the stack um
59:24: And you can utilize some of the existing rasterization pipelines as well, to save you some time.


1:08:52: so it's kind of like you know do a lot of kind of like the building process it doesn't end up as full machine code so like I would say it's sort of like a
59:31: Zita Zit is asking, I don't have a good understanding of splats, but aren't they essentially particles?


1:08:58: halfway step towards compilation um eventually we might consider like you know doing sort of a
59:36: So I kind of went over this a few times, they're not particles in the sense of a particle system,


1:09:04: jit compilation where it actually makes you know full machine code for the whole
59:41: there's some overlap, because each splat is a point, but it has a lot of additional data to it,


1:09:10: thing which could help like improve the performance of it as well um but right now it's U it is a
59:47: and it's also not a tiny small particle, but it can be variously sized color blob.
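To make the distinction concrete, a hypothetical side-by-side of the two kinds of point; every field name here is illustrative only.

<syntaxhighlight lang="python">
# Illustrative comparison, not any engine's actual types.
from dataclasses import dataclass
import numpy as np

@dataclass
class Particle:                 # a particle-system particle: small, short-lived, simulated
    position: np.ndarray
    velocity: np.ndarray
    lifetime: float
    color: np.ndarray           # a single RGBA color

@dataclass
class GaussianSplat:            # a splat: a static, variously sized and oriented color blob
    position: np.ndarray
    scale: np.ndarray           # per-axis size of the blob
    rotation: np.ndarray        # quaternion orientation
    opacity: float
    sh_coeffs: np.ndarray       # view-dependent color, e.g. 16 x 3 for third-order harmonics
</syntaxhighlight>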


1:09:16: VM that um we sort of halfway compilation step to kind of speed things
59:56: Next question: can it make splats with any 3D scans? I don't really get them, but they're neat.


1:09:21: up uh it also like does it to like you know validate certain things like for example you have like you know infinite
01:00:02: So the dataset I use for mine is essentially just photos, it's the same approach I use for traditional.


1:09:27: kind of continuation Loops like certain things are essentially like illegal like
01:00:21: There's different ways to make them, but the most common one that I've seen is you usually just take lots of photos,


1:09:33: you cannot have those be a valid program uh which kind of helps avoid you know
01:00:29: you use traditional approach, the photos get aligned in a space, and then you sort of estimate the depth.


1:09:39: some everyone's like you know having some kind of issues where we have to figure out certain problems at the r
01:00:39: Like with traditional 3D reconstruction, except for the splats, it doesn't really estimate the depth.


1:09:46: time but in short like the pro VM it's like a way for you know the prolex to
01:00:44: The way I've seen it done in the software I use is it starts with a sparse point cloud that's made from the tie points from the photos,


1:09:52: essentially do its job it's like an environment execution environment um you know that defines how
01:00:52: it's essentially points in space that are shared between the photos, and it generates splats from those.
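A hedged sketch of that initialization step, one starting splat per sparse point; the random arrays stand in for real tie points and the field names are invented for the example.

<syntaxhighlight lang="python">
# Stand-in data: in a real pipeline these come from aligning the photos (photogrammetry).
import numpy as np

rng = np.random.default_rng(0)
sparse_points = rng.uniform(-1.0, 1.0, size=(1000, 3))   # tie-point positions shared between photos
point_colors  = rng.uniform(0.0, 1.0, size=(1000, 3))    # per-point colors sampled from the photos

splats = [{
    "position": p,
    "scale": np.full(3, 0.02),                  # start as small blobs; training resizes them
    "rotation": np.array([1.0, 0.0, 0.0, 0.0]),
    "opacity": 0.5,
    "sh_dc": c,                                 # view-independent color term to start from
} for p, c in zip(sparse_points, point_colors)]
</syntaxhighlight>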


1:09:59: it look kind of works and then all the nodes can operate within that environment uh next question Nitra is
01:01:00: And the way it does it is, I believe it uses gradient descent, which is a form of machine learning,


1:10:06: asking is the current plan to move the graphical client to net n via multiprocess architecture be of the
01:01:07: where each of the splats is actually taught how it should look so it matches your input images.


1:10:12: sauce yes um so we are currently um I'm going actually just do I've done it on
01:01:15: So that's usually the longest part of your reconstruction process, because it has to go through a lot of training,


1:10:18: the first one but since I have a little bit better setup I might do it again just to get like you know better view um
01:01:23: like I said, Postshot, it runs usually several dozen thousand training steps,


1:10:24: let me actually get up for this one uh I'm going to move over here there we go so I'm going to move
01:01:38: and usually in the beginning you can see the splats are very fuzzy and they're just moving around,


1:10:32: over here I already have my brush that I forgot earlier I'm going to clean up um all this
01:01:43: and they're settling into space and getting more detail, and it also adds more splats in between them where it needs to add more detail.


1:10:39: stuff um let's move this to give you like a gist of like the performance
01:01:51: So there's like a whole kind of training process to it.
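As a loose analogy for that training loop, here is a deliberately tiny 1-D toy rather than actual splat-training code: a couple of Gaussian blobs are nudged by gradient descent until their sum matches a target signal, the same way splats are nudged until renders match the input photos.

<syntaxhighlight lang="python">
# 1-D analogy only: optimize Gaussian blobs so their rendered sum matches a target curve.
import numpy as np

x = np.linspace(0.0, 1.0, 200)
target = np.exp(-((x - 0.3) ** 2) / 0.002) + 0.5 * np.exp(-((x - 0.7) ** 2) / 0.005)

params = np.array([[0.2, 0.05, 0.8],        # per blob: center, width, amplitude
                   [0.8, 0.05, 0.8]])

def render(p):
    return sum(a * np.exp(-((x - c) ** 2) / (2 * w ** 2)) for c, w, a in p)

def loss(p):
    return np.mean((render(p) - target) ** 2)   # "how far are we from the input image"

lr, eps = 0.05, 1e-4
for step in range(2000):                        # real tools run tens of thousands of steps
    grads = np.zeros_like(params)
    for i in np.ndindex(params.shape):          # finite-difference gradients, for simplicity
        bumped = params.copy(); bumped[i] += eps
        grads[i] = (loss(bumped) - loss(params)) / eps
    params -= lr * grads                        # nudge each blob towards the target
# Real splat trainers also densify: they add new splats where the error stays high.
</syntaxhighlight>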


1:10:46: update there we go clean all this up um make sure I'm not going to hit a
01:01:56: I actually have a video I can show you, because there's also a relevant question I can see.


1:10:52: wall grab my brush there we go so right now a little bit
01:02:04: So I'm gonna...


1:10:59: more right now um so the way like the situation is
01:02:08: Because ShadowX is asking, does all common splatting software encode spherical harmonics?


1:11:07: right now oops so imagine this is Unity I'm
01:02:14: I never noticed color changes in my scans in Scaniverse and Postshot.


1:11:16: actually going to write it here unity and this is uh within the unity we
01:02:17: So I know for sure Postshot does it, I don't know about Scaniverse because I don't use that one.


1:11:24: have fr so this is FRS
01:02:22: It's possible they simplify it, because I've seen some implementations of Gaussian splats,


1:11:31: engine I'm just going to f
01:02:27: they just throw away the spherical harmonics and encode a single color, which saves tons of space,


1:11:37: and um so unit you know it has its own stuff just you know like whatever unit
01:02:31: but you also lose one of the big benefits of them.


1:11:43: it has and then with FRS engine most things in FRS engine they're actually
01:02:37: But I can tell you, Postshot definitely does it, and I have a video that showcases that pretty well.


1:11:49: fully contained you know within FRS engine so there's like lots of systems just going thr them as little boxes and
01:02:47: So this is a figure that I scanned before, and I reprocessed this with Gaussian splatting,


1:11:55: they kind of like fully contained Unity has no idea they even exist uh but then there's right now
01:02:55: and watch the reflections on the surface of the statue.


1:12:03: there's two systems uh which are sort of shared between the two um there's a
01:03:00: You can see how they change based on where I'm looking from, it's kind of subtle.


1:12:10: particle system and there's the audio
01:03:05: If I look at the top, there's actually another interesting thing, and I have another video for this,


1:12:22: system so those two systems uh they're essentially a hybrid where F
01:03:10: but Gaussian splatting is very useful if you have good coverage from all possible angles,


1:12:28: engine doesn't work and unity doesn't work and they're very kind of inter
01:03:18: because the way the scanning process works, like I mentioned earlier,


1:12:34: there's another part when for engine communicates with unity there's you know another bits
01:03:22: they are trained to reproduce your input images as close as possible,


1:12:41: there's also like lots of like you know little kind of connections between things that kind of you know tie the two
01:03:27: which means for all the areas where you have photo coverage from, they look generally great.


1:12:49: together um and the problem is Unity uses something called mon
01:03:34: But if you move too far away from them, like in this case for example from the top,


1:12:55: which is a runtime it's also actually like a VM you know like the prolex VM but different but essentially it's
01:03:38: I was not able to take any pictures from the top of it, it actually kind of like,


1:13:02: responsible for taking our code and running it you know translating into instructions for your CPU providing you
01:03:45: they start going a bit funky. Do you see how it's all kind of fuzzy and the color is blotchy?


1:13:09: know allo um like kind of Base Library like you know um implementations and so
01:03:56: And that's kind of like, for one it shows, it does encode the color based on the direction,


1:13:15: on and the problem is the version that you need to use this it's very old and
01:04:05: but it also shows one of the downsides of it, because I have another video here.


1:13:20: it's very slow and because like all of the for engion is kind of running inside of it um that makes it uh you know that
01:04:12: So this is a scan of, this is like, I don't even know how long ago, this was like over six years ago,


1:13:30: makes all of this like slow as well so what a plan is in order you know
01:04:24: when my family from Ukraine, they were visiting over because my grandma, she was Ukrainian,


1:13:36: to get a big big performance update is uh first we need to like simplify you
01:04:31: and they made borscht, which is like another kind of traditional kind of like foods,


1:13:42: know we need to disentangle the few bits of for extension from Unity as much as
01:04:36: and I wanted to scan it, but I literally didn't have time because they put it on the desk,


1:13:48: possible uh the part I've been working on you know um is the particle system uh
01:04:40: I was like, I'm gonna scan it, and I was only able to take three photos before they started moving things around.


1:13:53: that one's very close I think we'll probably start it like probably start this thing next week uh it's called
01:04:46: But it actually made for an interesting scan, because I was like, how much can I get out of three photos?


1:14:00: Photon dust that's our new in-house particle system and the reason we're
01:04:52: And in the first part, this is traditional scan with a mesh surface that's done with, you know,


1:14:05: doing it is so we can actually you know take this whole bit oh oh
01:04:58: with Agisoft Metashape. Oh, I already switched to the other one.


1:14:11: no I might I might just redraw it I wanted to make a nice visual part but uh
01:05:02: So you see, all the reflections, they're kind of like, you know, baked in.


1:14:17: it's not cooperating just going to do
01:05:05: It doesn't actually look like metal anymore.


1:14:22: this and then I'll do this and just you know particle system audio system so
01:05:09: There's, you know, lots of parts missing because literally they were not scanned,


1:14:27: what we do we essentially replace this with this we make it fully contained
01:05:13: but the software was able to estimate the surface.


1:14:33: inside of FR engine uh once that is done we're going to do the same for audio engine so it's going to be also fully
01:05:18: It knows this is a straight surface. If I look at it from an angle,


1:14:39: contained here which means we don't have you know ties here and then this part
01:05:21: apart from the missing parts, it's still coherent.


1:14:45: instead of like you know lots of little kind of like wires we're going to rework this so all the
01:05:24: It still holds shape.


1:14:52: communication uh with unity happens via like a very nicely defined sort of
01:05:27: With Gaussian splatting, it doesn't necessarily reconstruct the actual shape.


1:14:58: package where it like you know sends the data and like then the system you know
01:05:32: It's just trying to look correct from angles, and you'll be able to see that in a moment.


1:15:03: it'll do like you know whatever here but the tie to Unity is now you know greatly
01:05:37: So this is the Gaussian Splat, and you see it looks correct,


1:15:09: simplified it essentially sends all the stuff you know that needs to be rendered you know and some stuff that kind needs
01:05:40: and the moment I move it, it just disintegrates.


1:15:15: to come back is sent over a very well defined interface that can be um
01:05:43: Like, you see, it's just a jumble of, you know, colorful points,


1:15:20: communicated over you know some kind of like interprocess communication mechanism probably combination of like
01:05:46: and it's because all the views that I had, they're like relatively close to each other,


1:15:27: um uh a shared memory and some like you know pipe
01:05:51: and for those views, it looks correct because it was trying to look correct,


1:15:32: mechanism once this is done what we will be able to do we could actually take FRS
01:05:56: and because there's no cameras, you know, from the other angles,


1:15:37: engine and take this whole thing out if I kind of grab the whole thing it's being unwieldy uh just just pretend this
01:05:59: the Gaussians are free to do, you know, just whatever.


1:15:45: is smoother than it is they'll take it out into its own process and because we
01:06:02: Like, they don't have anything to constrain them, you know, to look a particular way,


1:15:53: now control that process instead of you know having being AB with unity we can use net
01:06:06: so it just ends up a jumble.


1:16:02: n and this part like this is majority of like you know where time is spent
01:06:08: And that's a very kind of, to me, that's a very kind of interesting way


1:16:09: running except you know when it comes to rendering which is Unity part which means because we'll be able to run with
01:06:13: to visualize the differences between the scanning techniques.


1:16:15: net 9 um we'll get a huge performance boost
01:06:18: But yeah, that kind of goes along with the answer too: yes, they do encode the spherical harmonics,


1:16:21: and the way we know we're going to get like you know significant performance boost is because we've already done this
01:06:23: and you can make it like, you know, pretty much with any scans,


1:16:26: for a headless client that was the first part you know of this performance work is move the Headless client to use net 8
01:06:25: but the quality of the scan is going to depend, you know, on your data set.


1:16:34: which now is net 9 because they released a new version um the reason we wanted to do
01:06:31: And I'll be kind of throwing, because I have like, terabytes of like, you know, 3D scans,


1:16:40: headless first is because headless already exists outside of unity it's not tied to it so it was much easier to do
01:06:35: I'll be just throwing everything, you know, at the software and seeing what it like ends up producing.


1:16:47: this for headless you know than doing this for the graphical client and headless it pretty much shares most of
01:06:41: I know there's also other ways, there's like, you know, some software,


1:16:53: this most of the code that's like you know doing the heavy processing is in the Headless same as you know on the
01:06:44: let me just double-check the time, there's also, you know, some software that just generates it,


1:16:59: graphical client uh when we made the switch and we had Community start
01:06:52: like, you know, with AI and stuff, but like, I don't know super much about that.


1:17:04: hosting events with the net 8 headlight we noticed a huge performance boosts
01:06:56: So there's like other ways to do them, but I'm mostly familiar with the one, you know, with,


1:17:09: there's been like sessions like for example the uh Grand Oasis karoke um I remember like they they used
01:07:02: I'm mostly familiar with, you know, like using photos as a dataset.


1:17:17: to their headless used to struggle when it was getting around you know 25 people on the FPS of the Headless would be
01:07:09: Next question, jviden4, ProtoFlux is a VM, it compiles things.


1:17:24: dropping you know the session will be degrading with the net 8 they've being
01:07:15: So, technically, VM and compiling, those are like two separate things.


1:17:29: able to host session which had I think at a peak like 44 users and all the users all the ik you
01:07:19: Also, Epic is asking what is a ProtoFlux VM, so I'm going to just combine those two questions into one.


1:17:36: know things that like all the dynamic bones all the you know perto flux everything you know that's being
01:07:27: So yes, it is a VM, which essentially means it's sort of like, you know, it has a defined sort of like runtime.


1:17:41: computed on graphical client it was being computed on headless minus obviously you know rendering stuff and
01:07:33: Like it's a technically stack-based VM.


1:17:49: the Headless was able to maintain 60 frames per second with 44 users
01:07:38: It's a, how do I put it, it's essentially sort of like an environment where the code of the nodes, you know,


1:17:55: which um you know that's at Le like an order of Mag magnitude kind of
01:07:45: it knows how to work with a particular environment that sort of isolates it, you know, from everything else.


1:18:00: improvement over you know running with mono um so doing it for headless first
01:07:55: It sort of compiles things, it's sort of like a halfway step.


1:18:08: that sort of let us you know gauge how much of a performance Improvement will
01:07:59: It doesn't, it doesn't directly produce machine code from actual code.


1:18:13: the switch make and whether it's worth it you know do the separation as early as possible and based on the data like
01:08:06: The actual code, you know, of the individual nodes that ends up being machine code for the node itself,


1:18:22: it's pretty much like you know a feel like it's very very worth it and this why we've been kind of you know focusing
01:08:11: by the way, it kind of operates with the VM.


1:18:29: on making this happen making you know this like you know do this kind of thing where you can pull F engine out of unity
01:08:15: What ProtoFlux does, it builds something called execution lists and evaluation lists.


1:18:35: around evident n and then the communication you know will just do instead of like you know the
01:08:20: It's kind of like, you know, a sequence of nodes or sequence of impulses.


1:18:40: communication happening within the process it's going to pretty much happen the same way except across you know
01:08:23: It's going to look at it and be like, okay, this executes, then this executes, then this executes,


1:18:47: process boundary the other benefit of this uh is
01:08:27: and builds a list, and then during execution it already has the pre-built list,


1:18:55: you know how do we align this you know because we still even when we do this once we reach this point we'll still
01:08:32: as well as, like, it resolves things like, you know, stack allocation.


1:19:00: want to get rid of unity for a number of reasons um one of those is you know being like custom shaders we those are
01:08:36: It's like, okay, this node needs to use this variable and this node needs to use this variable.


1:19:08: really really difficult to do with unity at least you know making them like real time and making them you know support
01:08:39: I'm going to allocate this space on the stack and, you know,


1:19:14: like backwards compatibility making sure the content doesn't break stuff like that um being able to use more efficient
01:08:42: and I'm going to give these nodes the corresponding offsets so they can, you know,


1:19:20: rendering methods like instead of you know having to rely on deferred um will be able to like you know use
01:08:48: read the proper values to and from the stack.


1:19:27: like cluster forward which can handle you know lots of different shaders with lots of
01:08:53: So it's kind of like, you know, it does a lot of kind of like the building process.


1:19:33: lights so we'll want to get rid of unity as well and this whole thing where the
01:08:55: It doesn't end up as full machine code.


1:19:39: communication between FRS engine which does you know all the kind of computations and then send stuff be like
01:08:57: So like, I would say it's sort of like a halfway step towards compilation.
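A toy illustration of the pre-building idea described here: walk a node sequence once, record the execution order, and hand each node a fixed stack offset. The classes and fields are invented for the example and are not ProtoFlux's actual data structures.

<syntaxhighlight lang="python">
# Toy sketch: build an execution list and assign stack offsets ahead of time.
class Node:
    def __init__(self, name, outputs=1):
        self.name, self.outputs = name, outputs
        self.stack_offset = None

def build_execution_list(nodes):
    execution_list, stack_size = [], 0
    for node in nodes:                       # "this executes, then this, then this..."
        node.stack_offset = stack_size       # reserve stack space for this node's outputs
        stack_size += node.outputs
        execution_list.append(node)
    return execution_list, stack_size

chain = [Node("ReadVariable"), Node("Add"), Node("WriteVariable", outputs=0)]
order, size = build_execution_list(chain)
# At run time the VM just walks `order` and reads/writes the pre-assigned offsets,
# instead of resolving the graph again on every impulse.
</syntaxhighlight>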


1:19:45: please render this stuff for me um because this process makes this a lot more defined we can essentially take the
01:09:02: Eventually we might consider like, you know, doing sort of a JIT compilation where it actually


1:19:53: whole Unity just going to e it
01:09:06: makes, you know, full machine code for the whole thing,


1:19:59: away and then we'll plug in
01:09:10: which could help like improve the performance of it as well.


1:20:05: Source instead so s is going to have like you know its own things and inside like
01:09:13: But right now it's, it is a VM


1:20:11: sauce is actually going to be like you know right now it's being built on the Bevy like rending
01:09:18: that does sort of a halfway compilation step to kind of speed things up.


1:20:16: engine so I'm just going to put it there and the communication is going to happen pretty much the same way you know and
01:09:22: It also like does it like, you know, validate certain things.


1:20:23: this is going to do whatever so we can we can you know snip Unity out and replace it with sauce there's
01:09:25: Like, for example, you have like, you know, infinite kind of continuation loops,


1:20:30: probably going to be some minor modifications to this how it kind of communicates so we can kind of build around the new features of sauce and so
01:09:29: like certain things are essentially like illegal.


1:20:37: on but the principle of it by moving for exension out by making everything neatly
01:09:33: Like you cannot have those be a valid program,


1:20:43: comp combined making a neat you know communication method it makes the switch to Source much easier as well as the
01:09:37: which kind of helps avoid, you know, some,


1:20:49: next step um it's actually the latest thing from development of of sauce there
01:09:40: having some kind of issues where we have to figure out certain problems at runtime.
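One way such up-front validation can work, sketched as simple cycle detection over continuation edges; this is illustrative only, not ProtoFlux's actual rules or representation.

<syntaxhighlight lang="python">
# Toy validator: reject a node graph whose continuation edges form a cycle.
def has_continuation_cycle(graph):
    """graph maps node name -> list of node names its continuation leads to."""
    WHITE, GRAY, BLACK = 0, 1, 2
    state = {node: WHITE for node in graph}

    def visit(node):
        state[node] = GRAY
        for nxt in graph.get(node, []):
            if state.get(nxt, WHITE) == GRAY:            # back edge: an infinite loop
                return True
            if state.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        state[node] = BLACK
        return False

    return any(state[n] == WHITE and visit(n) for n in graph)

assert has_continuation_cycle({"A": ["B"], "B": ["A"]})      # would never terminate: invalid
assert not has_continuation_cycle({"A": ["B"], "B": []})     # terminates: fine
</syntaxhighlight>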


1:20:55: was actually a decision made that uh sauce is probably not going to have any c Parts at all uh it's going to be
01:09:47: But in short, like the ProtoFlux VM, it's like a way for,


1:21:01: purely rest based which means like it doesn't even need to um it doesn't need to like you know worry about net 9 or
01:09:50: you know, the ProtoFlux to essentially do its job. It's like an environment, execution environment,


1:21:09: like you know C interrup because uh its responsibility is going to be you know rendering whatever for engine sends it
01:09:57: you know, that defines how it kind of works


1:21:15: and then maybe know sending some like call like methods kind of back like where needs to be by directional
01:10:00: and then all the nodes can operate within that environment.


1:21:21: communication to like you know syn stuff up um but like you know actual world
01:10:05: Next question, Nitra is asking, is the current plan to move the graphical client to .NET 9


1:21:26: like the word model you know all the kind of like all the like you know interaction that's going to be fully
01:10:10: via the multiprocess architecture before moving to Sauce? Yes, so we are currently,


1:21:31: contained in F engine external to Source um then on itself that's going to be a
01:10:16: I might actually just do, I've done it on the first one, but since I have a little bit better setup,


1:21:36: big upgrade uh because it's going to be much more more ring engine will be able to do you know things like the custom
01:10:21: I might do it again just to get like, you know, a better view. Let me actually get up for this one.


1:21:43: shaders like was mentioning there's some potential benefits to this as well because it um
01:10:26: I'm going to move over here. There we go.


1:21:48: the multiprocess architecture uh it's inspired by you know Chrome and Firefox
01:10:31: So I'm going to move over here. I already have my brush that I forgot earlier.


1:21:53: which do the same thing uh where your web browser is actually running you know multiple
01:10:34: I'm going to clean up all this stuff.


1:22:00: processes um one of the benefits that adds is you know sandboxing because um
01:10:41: Let's move this to give you like a gist


1:22:06: once this is kind of done we'll probably do the big move like this and at some point later in the future will split
01:10:44: of like the performance update. There we go. Clean all this up.


1:22:12: this even into more processes so each of the worlds you host you know can be its own
01:10:50: Make sure I'm not going to hit a wall. Grab my brush.


1:22:17: process also net 9 you know or whatever the net version is so this going to be like you know one world is going to be
01:10:55: There we go. So right now, a little bit more.


1:22:24: another world world and these will you know communicate with this and this will communicate with this and the benefit is
01:10:58: Right now.


1:22:31: like you know if a v crashes is not going to bring the entar thing down it's the same thing you know
01:11:03: So the way like the situation is right now.


1:22:37: in a web browser if you ever if you ever had your browser tab crash this is kind
01:11:12: So imagine


1:22:42: of similar principle it crashes just the tab instead of crashing the whole thing similar thing we might be able to I'm
01:11:13: this is Unity. I'm actually going to write it here.


1:22:49: not promising this right now but we might be able to design this in a way where if the renderer crashes
01:11:20: And this is within the Unity,


1:22:55: well just real on [ __ ] you'll still stay in the world that you're in your visuals are just going to know go away for a bit
01:11:23: we have FrooxEngine.


1:23:00: and then going to come back so we can like you know reboot this whole part without bringing this whole thing down
01:11:28: So this is FrooxEngine.


1:23:05: and of course if this part comes down you know then it's over but then you have to restart uh but by splitting into
01:11:34: So Unity, you know,


1:23:13: more modules you kind of you know you essentially eliminate the possibility of
01:11:39: it has its own stuff.


1:23:19: crashing because this part will eventually be doing relatively little it's just going to be know Cor in the
01:11:43: And with FrooxEngine, most things in FrooxEngine,


1:23:24: different processes but for the for the first part we're just going to move you know for extension into separate process out of
01:11:47: they're actually fully contained within FrooxEngine. So there's like lots of systems


1:23:32: unity that's going to give us big benefit thanks to that n um there's other benefits because for example unit
01:11:52: just gonna draw them as little boxes. And they're all kind of like fully contained. Unity has no idea


1:23:39: unit is garbage collector is very slow uh and very CPU heavy with net 9 has way
01:11:58: that they even exist. But then there's,


1:23:45: more performant one as well we'll be able to utilize you know new performance benefits of like net 9 like in the code
01:12:02: right now there's two systems which are sort of shared between the two.


1:23:52: itself because it will be able to start you know using new functions with INF engine because now it like you know we
01:12:09: There's a particle system.


1:23:57: now we don't have to worry about what Unity supports um following that uh the next
01:12:15: And then there's the audio system.


1:24:04: big step is probably going to be switch to Sauce so we're going to you know replace Unity with sauce and at some
01:12:23: So those two systems, they're essentially a hybrid.


1:24:10: point in the future we'll do like more splitting um you know for fruit engine into more separate processes to improve
01:12:28: Where FrooxEngine does some of the work and Unity does some of the work. And they're very kind of


1:24:16: stability and also at sandboxing because once you can of do this you can sandbox
01:12:32: intertwined. There's another part when FrooxEngine communicates with


1:24:21: this whole process using you know the operating system soundbox in Primitives which will
01:12:37: Unity, there's other bits. There's also lots of


1:24:27: improve security um so that's kind of like you know the general kind of over plan what
01:12:42: little connections between things


1:24:33: you want to do you know with the architecture of the whole system and it's been like heavily like I've been
01:12:46: that tie the two together.


1:24:38: like reading a lot you know how Chrome and Firefox did it and Firefox actually did a similar thing where they used to
01:12:51: And the problem is, Unity uses something called Mono, which is a runtime,


1:24:44: be like a monolithic process and then they started like you know doing work to break it down into less processes and
01:12:57: it's also actually like a VM, like the ProtoFlux VM but different. But essentially it's responsible


1:24:50: eventually did you know just two processes and then it kind of broke it down into even more
01:13:02: for taking our code and running it. Translating it into instructions for


1:24:55: and we're essentially going to be doing similar thing there so I hope this um
01:13:07: CPU, providing all the base library


1:25:00: this kind of an anwers it gives you better idea of what we want to do with
01:13:13: implementations and so on. And the problem is,


1:25:06: performance uh you know for forite and what you know what what are the major
01:13:17: the version that Unity uses, it's very old and it's very slow.


1:25:11: steps uh that we need to take and also explains why we are actually re working
01:13:22: And because all of the FrooxEngine is running inside of it,


1:25:16: in the particle system and audio system because on the surface it might seem you
01:13:27: that makes all of this slow as well.


1:25:22: know there's like why we re working the particle and audio system when we want to know more
01:13:34: So what the plan is, in order to get a big performance update,


1:25:28: performance um and the reason is you know just so we can kind of show them fully into F engine make them kind of
01:13:39: is first we need to simplify, we need to


1:25:34: mostly independent like you know of unity and then we can pull the F ex engine out and that's the major reason
01:13:43: disentangle the few bits of Froox Engine from Unity as much as possible.


1:25:41: we're doing it the other part is you know so we have our own system that we kind of control because once we also
01:13:49: The part I've been working on is the particle system.


1:25:47: switch Unity for Source if the particle system was still in unity Source would have to re reimplement it and it also
01:13:56: I'll probably start this thing next week. It's called PhotonDust,


1:25:53: complicate this whole part because like no now we have to like synchronize this particle system with
01:14:01: it's our new in-house particle system, and the reason we're doing it


1:26:00: all the details of the particle system on this end um so that's that's another benefit uh
01:14:06: is so we can actually take this whole bit...


1:26:08: but there's also some actual performance benefit even just from the new particle
01:14:11: I might just redraw it. I wanted to make a nice visual part


1:26:13: system uh because uh the new particle system is designed to be a synchronous
01:14:16: but it's not cooperating.


1:26:19: which means if you do something really heavy you're only going to see the particle system like and you will not
01:14:20: I'm just going to do this,


1:26:24: like as much because um the particle system if it's not if if it doesn't finish its computations within you know
01:14:23: and then I'll do this, and just particle system, audio system.


1:26:31: specific time is just going to skip and you know render the previous state and the paral system itself will
01:14:27: So what we do, we essentially replace this with this. We make it fully contained


1:26:39: like but you will't like as much so that should uh help improve your overall frame rate as well so that's pretty much
01:14:33: inside of Froox Engine. Once that is done, we're going to do the same for the audio engine,


1:26:46: the just of it um the partical system is almost done um we'll probably you know
01:14:38: so it's going to be also fully contained here, which means we don't have ties here.


1:26:52: start testing like this upcoming week uh the audio system that's going to be the next thing after that it's going to
01:14:42: And then this part, instead of lots of little wires,


1:26:57: be know interface with unity once that is done then the pool happens into the separate process which is going to be a
01:14:48: we're going to rework this. So all the communication


1:27:04: relatively simple process at that point because every everything everything will be in place you know for it pull out
01:14:52: with Unity happens via a very nicely defined


1:27:11: from unit to happen so hopefully this kind of you know uh gives you much
01:14:57: package, where it sends the data, and then the system


1:27:16: better idea and if like any questions about it you know feel free to ask we always happy to kind clarify how this is
01:15:02: will do whatever here. But the tie to Unity is now greatly simplified. It essentially sends all the stuff that needs to be


1:27:22: going to work um I'm going to go big boink there we go uh I'm going to
01:15:12: rendered, and some stuff that needs to come back is sent over a very well-defined


1:27:34: down there we go so there was that was another of those kind of rid hard questions I kind of do this explanation
01:15:17: interface that can be communicated over some kind of


1:27:41: on my first episode but uh the I kind of wanted to do it because I have like a little bit better setup you know with
01:15:22: inter-process communication mechanism. Probably a combination of


1:27:47: the right things so we can make also like a clip out of it so people so we have something to refer people to but
01:15:27: a shared memory and some pipe mechanism.
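A minimal sketch of that shared-memory-plus-pipe pattern using Python's standard library; the message shape and field names are assumptions for the example, not Resonite's actual protocol.

<syntaxhighlight lang="python">
# Bulk data goes into shared memory, small well-defined commands go over a pipe.
from multiprocessing import Pipe, shared_memory
import numpy as np

# "Engine" side: place the bulk payload (think vertex or texture data) into shared memory...
payload = np.arange(12, dtype=np.float32)
shm = shared_memory.SharedMemory(create=True, size=payload.nbytes)
np.ndarray(payload.shape, dtype=payload.dtype, buffer=shm.buf)[:] = payload

# ...and send only a small command describing it.
engine_end, renderer_end = Pipe()
engine_end.send({"cmd": "upload", "shm": shm.name, "shape": payload.shape, "dtype": "float32"})

# "Renderer" side: read the command, attach to the shared block, consume the data.
msg = renderer_end.recv()
block = shared_memory.SharedMemory(name=msg["shm"])
data = np.ndarray(msg["shape"], dtype=msg["dtype"], buffer=block.buf).copy()
block.close(); shm.close(); shm.unlink()
</syntaxhighlight>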


1:27:54: yeah uh that's that's the answer to n's question I'm also checking time I have like about 30 minutes left um so let's
01:15:33: Once this is done, what we'll be able to do, we could actually take Froox Engine


1:28:02: see do we have there's a few questions but I think I can get through them uh this actually kind of working
01:15:37: and take this whole thing out, if I can grab the whole thing, it's being unwieldy.


1:28:07: out I've been like worried a little bit because I'm taking a while to answer some of the questions and going on tangents but uh um uh seems to kind of
01:15:44: Just pretend this is smoother than it is.


1:28:16: work out with the questions we have so next uh Shadow X does all common
01:15:48: They'll take it out into its own process.


1:28:21: spotting software and oh I already asked sorry I already answered that one um J V for
01:15:52: And because we now control that process, instead of it being embedded with Unity,


1:28:29: uh so VM is kind of like optimization layer rather than something AK to C or Chrome V8 so it has the fundamentals of
01:15:57: we can use .NET 9.


1:28:34: a VM but goal is to just a ver I know ahead of time what needs to be run pipeline quickly it's I mean it's it's
01:16:03: And this part, this is majority


1:28:40: the same general Principle as like you know as the as the CLR or chrom's V8 VM
01:16:06: of where time is spent running, except when it comes to rendering,


1:28:46: is essentially it's just an environment in which the code can exist and which in which the code operates and the way the
01:16:11: which is the Unity part. Which means, because we'll be able to run with .NET 9,


1:28:53: VM you know runs that code can differ some VMS you know they can be purely
01:16:17: we'll get a huge performance boost.


1:28:58: interpreted you literally you know maybe you just have a switch statement there is just switching based on instruction
01:16:21: And the way we know we're going to get a significant performance boost is because we've already done this


1:29:03: and doing things maybe it does you know some kind of like compiles into some some sort of as and then you know
01:16:25: for our headless client. That was the first part of this performance work,


1:29:10: evaluate stad or maybe it takes it and actually emits you know machine code for whatever architecture you're running on
01:16:31: is move the headless client to use .NET 8, which now is .NET 9 because they released a new version.


1:29:16: there's lots of different ways for VM to execute your code um so the way protox
01:16:38: The reason we wanted to do headless first


1:29:22: executes code and the way or V8 execute code is different I think actually V8
01:16:40: is because headless already exists outside of Unity, it's not tied to it.


1:29:29: like I think it do like a hybrid where it like it kind of converts some parts into
01:16:45: So it was much easier to do this for headless than doing this for a graphical client.


1:29:34: machine code and like some it kind of interprets but it doesn't interpret the original like you know types code
01:16:50: And headless, it pretty much shares most of this.


1:29:40: interprets like some of the you know uh abstract syntax three um I don't fully
01:16:54: Most of the code that's doing the heavy processing is in the headless, same as on the graphical client.


1:29:46: remember like the details but I think like V like does a hybrid where we can actually have kind of both Seer that
01:17:01: When we made the switch and we had the community start hosting events


1:29:55: yeah S I think always translates it to machine code but one thing they did introduce uh with the latest like
01:17:05: with the .NET 8 headless, we noticed a huge performance boost.


1:30:01: versions is they have multi- tier rigid compilation so one of the things they do
01:17:09: There's been sessions, like for example the Grand Oasis karaoke.


1:30:08: is like when you code runs they willit compile it into machine code which is actually you know native code for your
01:17:15: I remember their headless used to struggle.


1:30:15: CPU and um they they just run it like they should compile it like you know
01:17:19: When it was getting around 25 people, the FPS of the headless would be dropping down


1:30:21: fast because you want to you want you don't want to waiting you know for the application to actually run but that
01:17:25: and the session would be degrading.


1:30:27: means they cannot do as many optimizations what they do though is like when they like when theit compiler
01:17:27: With the .NET 8, they've been able to host sessions which had, I think at the peak, 44 users.


1:30:33: makes that code that's like you know done in a very quick way so it's not as optimal it has like a counter each time
01:17:34: And all the users, all of their IK, all the dynamic bones, all the ProtoFlux,


1:30:39: like you know method is called and if it crosses a certain threshold you know say like the method gets called more than 30
01:17:40: everything that's been computed on graphical client, it was being computed on headless,


1:30:45: times it's going to trigger the jit compiler to compile much more optimized
01:17:46: minus obviously rendering stuff.


1:30:51: version but like it goes a really heavy you know on the optimizations to make much more
01:17:49: And the headless was able to maintain 60 frames per second with 44 users.


1:30:57: faster code which is going to take it some time but also in the meanwhile as long as it's doing it it can still keep
01:17:55: Which is like an order of magnitude improvement over running with mono.


1:31:03: running you know the slow code that was legit compiled once the legit compiler is ready it just swaps it out for the
01:18:06: So, doing it for headless first, that sort of let us gauge how much of a performance improvement will this switch make.


1:31:09: more optimized version um and uh you
01:18:14: And whether it's worth it to do the separation as early as possible.


1:31:14: know and and at that point your code actually speeds up so we have code that's like you know being called very
01:18:19: And based on the data, it's pretty much like, I feel like it's very, very worth it.


1:31:20: often you know like the main game Loop for example it ends up compiling in it in very optimal way if you have
01:18:32: And that's why we've been focusing on making this happen, where you can pull Froox Engine out of Unity, run it with .NET 9.


1:31:26: something that runs just once like for example some initialization method you know like when you start up the engine
01:18:37: And then the communication will just do, instead of the communication happening within the process,


1:31:31: there's some initialization that only runs once it doesn't need to do heavy optimizations on it because that we just
01:18:43: it's going to pretty much happen the same way except across process boundary.


1:31:37: waste of time it speeds up the startup time and it's kind of optimizes for both and I think I think the latest
01:18:52: The other benefit of this is, how do we align this?


1:31:44: version um it actually added um I forget the term for it they used but it's a
01:18:56: Because even when we do this, once we reach this point, we'll still want to get rid of Unity for a number of reasons.


1:31:51: it's essentially like in know multi-stage compilation where they look at what are the common you know
01:19:05: One of those is being like custom shaders. Those are really, really difficult to do with Unity,


1:31:57: Arguments for particle method and then assume those arguments are constants and
01:19:10: at least making them real-time and making them support backwards compatibility,


1:32:03: it will compile a special version of the method you know with those arguments as constants which lets you optimize even
01:19:15: making sure the content doesn't break, stuff like that.


1:32:09: more because now we don't have to worry can this argument you know be different values and you have to do all the math it can precompute all that math that is
01:19:18: Being able to use more efficient rendering methods.


1:32:16: dependent on that argument ahead of time so it actually runs much faster and if we have a method that is often times
01:19:21: Instead of having to rely on deferred, we'll be able to use clustered forward,


1:32:23: called with um very specific arguments it now runs much faster and there
01:19:29: which can handle lots of different shaders with lots of lights.


1:32:29: actually another jit another VM that did this called the Lua jit which is um like
01:19:34: We'll want to get rid of Unity as well, and this whole thing,


1:32:35: like runtime for the laa language and what was really cool about that one
01:19:38: where the communication between FrooxEngine, which does all the computations,


1:32:42: is like um even so Lua you know it's just considered this kind of scripting language Lua jit it was able to
01:19:44: and then sends stuff like, please render this stuff for me.


1:32:49: outperform languages like C and C++ in some benchmarks because
01:19:48: Because this process makes this a little more defined,


1:32:54: with CN C++ all of the code is compiled ahead of time so like you you don't know
01:19:51: we can essentially take the whole Unity, just gonna yeet it away,


1:33:00: actually you know what kind of arguments you're getting what L was able to do is like be like okay this value is always
01:20:00: and then we'll plug in Sauce instead.


1:33:07: an integer and or maybe this value is always integer that's you know number
01:20:08: So Sauce is gonna have its own things, and inside Sauce is actually gonna be,


1:33:12: 42 um so like I'm just going to compile a method that assumes this is
01:20:12: right now it's being built on the Bevy rendering engine.


1:33:18: 42 and it makes like super optimal version of the you know method when when and that one rounds
01:20:17: So I'm just gonna put it there, and the communication is gonna happen


1:33:25: like you know it's even more optimized you know than the CN C++ because CN C++ cannot make those
01:20:20: pretty much the same way, and this is gonna do whatever.


1:33:31: assumptions there's like some like you know I know there ex is like you know there like profiling compilers where
01:20:25: So we can snip Unity out, and replace it with Sauce.


1:33:37: actually run your code and you know they will try to also figure it out and then you compile your code um you know with
01:20:30: There's probably gonna be some minor modifications to this,


1:33:43: those kind of profiling optimizations and can do some of that too but um it
01:20:32: how it communicates, so we can build around the new features of Sauce and so on.


1:33:48: kind of shows you know like it like there's some benefits you know to the compilers where they can be kind of
01:20:37: But the principle of it, by moving FrooxEngine out,


1:33:55: more adaptive and they kind of they can kind of you know do it for free because you don't you don't have to
01:20:41: by making everything neatly combined, making a neat communication method,


1:34:01: like um you don't have to like you know do it as a part of your development process once you kind upgrad in your
01:20:49: it makes the switch to Sauce much easier, as well as the next step.


1:34:07: system it just kind of you get all these kind of benefits and it's and it's able to run
01:20:52: There's actually the latest thing from the development of Sauce,


1:34:13: your code that's the same exact code as you had before it's able to run it much faster because it's a lot smarter how it
01:20:55: there was actually a decision made that Sauce is probably not gonna have


1:34:20: converts it into machine code um so next question is Nitra is asking uh
01:20:58: any C-Sharp parts at all, it's gonna be purely Rust-based,


1:34:28: how about video players are they already full inside for extension as well no for video players they actually exist more
01:21:02: which means it doesn't even need to worry about .NET 9 or C-Sharp Interop,


1:34:34: on the unity side and they are pretty much going to remain majorly on the
01:21:10: because its responsibility is gonna be rendering whatever FrooxEngine sends it,


1:34:39: unity side um the reason for that is because the video player it has a very
01:21:15: and then maybe sending some methods back,


1:34:44: tight integration with the unity engine because it needs to update you know the GPU textures with the decoded video data
01:21:19: where there needs to be bidirectional communication to sync stuff up.


1:34:52: um it has a bit like you know mechanism because we will need to send the audio data back for the F engine to like
01:21:24: But all the actual world model, all the interaction that's gonna be fully contained


1:34:57: process and send back um so that's going to be um like like
01:21:31: in FrooxEngine external to Sauce.


1:35:05: video players is essentially going to be considered like you know as an asset because even like you know stuff like textures when you load a texture you
01:21:35: That on itself is gonna be a big upgrade,


1:35:11: know it needs to be uploaded to the GPU memory it needs to be sent to the render and the way I plan to approach that one
01:21:38: because it's gonna be a much more modern rendering engine,


1:35:18: is you know through a mechanism called shared memory where the texture data itself FRS engine will locate in a
01:21:41: we'll be able to do things like the custom shaders, like I was mentioning.


1:35:24: shared memory and then it will pass you know over the pipe it will pass it will essentially tell the renderer here's
01:21:45: There's some potential benefits to this as well,


1:35:31: shared memory here's the texture like you know information like the size format and so on please upload it to the
01:21:47: because the multi-process architecture is inspired by Chrome and Firefox,


1:35:37: memory under you know this handle for example and it assigns it you know some kind of like number to identify the
01:21:53: which do the same thing, where your web browser is actually running multiple processes.


1:35:44: texture and essentially sends that over to the you know unit engine unit engine is going to like you know read the texture data and upload to the GPU and
01:22:01: One of the benefits that adds is sandboxing,


1:35:52: it's going to send a message back to fr engine be like you know texture number you know 420 has been you know uploaded
01:22:04: because once this is kind of done, we'll probably do the big move like this,


1:35:59: to the GPU and fr engine knows okay this one's now loaded and then when it sends it please render these things it's going
01:22:09: and at some point later in the future, we'll split this even into more processes,


1:36:04: to be like okay render this thing you know with texture number 422 and it's going to you know send it
01:22:15: so each of the worlds you host can be its own process, also .NET 9, or whatever the .NET version is.


1:36:10: as part of it package to like unity and unity will know okay I I have this texture and this number you know and
01:22:22: So this can be one world, this can be another world,


1:36:17: like it's going to S like prepare things it's going to be similar thing with video players where the playback and the
01:22:26: and these will communicate with this, and this will communicate with this.


1:36:23: coding happens on the unity side um and for ex just sends you know some like basic information to kind of update it
01:22:30: And the benefit is if a world crashes, it's not gonna bring the entire thing down.


1:36:30: you know be like you know the position should be this and this you know like you should do do and do this and this
01:22:36: It's the same thing in a web browser.


1:36:36: with a playback and it's going to be sending back you know some audio data uh but yeah those parts those parts
01:22:38: If you ever had your browser tab crash, this is a similar principle.


1:36:42: are going to remain like within within like Unity uh so quite a bit like we have
01:22:43: It crashes just the tab instead of crashing the whole thing.


1:36:49: like 25 minutes um and there's only like one question right now
01:22:47: Similar thing, we might be able to, I'm not promising this right now,


1:36:55: um uh J ven I guess something I don't really understand about reson is where
01:22:50: but we might be able to design this in a way where if the renderer crashes,


1:37:00: simulation actually happens the server simulate everything and the CLI just pull or the clients do some work and S
01:22:55: we'll just relaunch it. You'll still stay in the world that you're in,


1:37:06: to the server it's a mix for example players on ik local perlex or do all servers and clients simulate everything
01:22:58: your visuals are just gonna go away for a bit and then gonna come back.


1:37:12: local as defined perlex can get pretty confusing pretty fast um so usually it's
01:23:01: So we can reboot this whole part without bringing this whole thing down.
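A generic supervisor-style sketch of that idea: keep the renderer in a child process and relaunch it if it dies, while the parent keeps running. The command and the restart policy are placeholders, not Resonite code.

<syntaxhighlight lang="python">
# Toy supervisor: restart a crashed child process without taking the parent down.
import subprocess, sys, time

RENDERER_CMD = [sys.executable, "-c", "import time; time.sleep(5)"]   # stand-in "renderer"

def supervise(max_restarts=3):
    restarts = 0
    child = subprocess.Popen(RENDERER_CMD)
    while restarts <= max_restarts:
        if child.poll() is not None:                 # child exited (crashed or finished)
            restarts += 1
            child = subprocess.Popen(RENDERER_CMD)   # relaunch; the simulation keeps going
        time.sleep(0.5)                              # the parent's own work would happen here
    child.terminate()
</syntaxhighlight>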


1:37:18: a mix but like the way uh the way like a resonite works so like fruits engine
01:23:05: And of course, if this part comes down, then it's over, and you have to restart.


1:37:24: works is by default everything's built around your data model and by default the data model
01:23:11: But by splitting into more modules, you essentially eliminate the possibility of crashing


1:37:31: is implicit and synchronized which means if you change something in the data model FRS engine will replicate it you
01:23:19: because this part will eventually be doing relatively little.


1:37:38: know to everyone else um and the way you know most things like the components and stuff works is
01:23:22: It's just gonna be coordinating the different processes.


1:37:44: the DAT data model itself um it's sort of like like an AO
01:23:26: But for the first part, we're just gonna move Froox Engine into a separate process out of Unity.


1:37:50: things you know it's like the data model says you know this this is how things should be and any state you know any any
01:23:33: That's gonna give us big benefit thanks to .NET 9.


1:37:57: state that's uh ends up you know representing something like it represents you know something out of
01:23:37: There's other benefits because, for example, Unity's garbage collector is very slow


1:38:03: visual some state of some system that's fully contained within the data model and that's the really important part the
01:23:42: and very CPU heavy, but .NET 9 has way more performance as well.


1:38:10: only thing that can be like local to the components by default is you know any caching data or any data you know that
01:23:48: We'll be able to utilize new performance benefits of .NET 9 in the code itself


1:38:18: can be deterministically computed from the data model uh if the data model changes it
01:23:52: because we'll be able to start using new functions within Froox Engine


1:38:24: doesn't matter what internal data it has the data model says you know things should be this way um and then like the whole
01:23:56: because now we don't have to worry about what Unity supports.


1:38:32: synchronization is built on top of that like the whole idea is like you know by default you don't actually have to think
01:24:01: Following that, the next big step is probably gonna be to switch to Sauce.


1:38:38: about it like you know like the data model is going to handle the synchronization for you it's going to
01:24:07: So we're gonna replace Unity with Sauce, and at some point in the future


1:38:43: resolve conflicts you know it's going to resolve like if you know say multiple people change a thing or if people you
01:24:10: we'll do more splitting for Froox Engine into more separate processes to improve stability.


1:38:49: know change a thing they're not allowed to it's going to you know resolve those data Chang
01:24:17: And also add sandboxing, because once you do this, you can sandbox this whole process


1:38:55: um and you essentially build your thing to like respond to data so if your data is you know guaranteed to be
01:24:23: using the operating system sandboxing primitives, which will improve security.


1:39:01: synchronized and conflict resolved then the behaviors that depend on the data always lead to the same you know or
01:24:30: So that's the general overall plan, what we want to do with the architecture of the whole system.


1:39:08: these conversion results um what this does it kind of gives you you know benefits to write
01:24:38: I've been reading a lot about how Chrome and Firefox did it, and Firefox actually did a similar thing


1:39:14: systems all over different ways and but like the the main thing is like you
01:24:43: where there used to be a monolithic process, and then they started doing work to break it down into multiple processes,


1:39:21: know you don't have to worry but synchronization like it it just kind of
01:24:50: and eventually they did just two processes, and then they broke it down into even more.


1:39:26: happens automatically um and it kind of changes the problem where instead of um instead
01:24:55: And we're essentially gonna be doing a similar thing there.


1:39:35: of like uh what do I put
01:24:58: So I hope this answers it, gives you a better idea of what we want to do with performance for Resonite


1:39:41: it so instead of instead of um you know things being syn versus
01:25:08: and what are the major steps that we need to take, and also explains why we are actually reworking the particle system and audio system.


1:39:47: noning be a problem we have to think about it turns it into an optimization problem because you could have you know
01:25:18: Because on the surface, it might seem odd, you know, like, why are we reworking the particle and audio system


1:39:53: people like Computing multiple kind of things or like Computing it on the wrong end things that can be computed from
01:25:26: when we want, you know, more performance, and the reason is, you know, just so we can kind of move them


1:39:59: other stuff and you end up wasting Network traffic as a result but for me
01:25:32: fully into Froox Engine, make them kind of mostly independent, like, you know, of Unity,


1:40:04: that's much better problem to have than things getting out of sync what we do have is like the data
01:25:37: and then we can pull the Froox Engine out, and that's the major reason we're doing it.


1:40:11: model has mechanisms to optimize that one of those mechanisms being
01:25:41: The other part is, you know, so we have our own system that we kind of control, because once we also switch Unity for Sauce,


1:40:16: drives um drives essentially tell like the drive is a way of telling the data
01:25:49: if the particle system was still in Unity, Sauce would have to reimplement it, and it would also complicate this whole part.


1:40:22: model I'm taking control you know of this part of data model don't synchronize it I am responsible for
01:25:56: Because, like, now we have to, like, synchronize this particle system with all the details of the particle system on this end.


1:40:29: making sure you know this stays you know consistent when it needs
01:26:05: So, that's another benefit. But, there's also some actual performance benefit, even just from the new particle system.


1:40:35: to um and a way to think about Drive is you know you can have something like a smooth LP node which is you know like a
01:26:15: Because the new particle system is designed to be asynchronous.


1:40:42: one of the community favorites and the way that works it actually has like its own internal computation that's not
01:26:19: Which means if you do something really heavy, only the particle system is going to lag, and you will not lag as much.


1:40:47: synchronized whatever input to smoth Smo is is syn generally synchron IED because
01:26:25: Because the particle system, if it doesn't finish its computations within a specific time, it's just going to skip and render the previous state.


1:40:54: it kind of comes from data model but the output doesn't need to be synchronized because it's convergent so like you're
01:26:37: And the particle system itself will lag, but you won't lag as much. So that should help improve your overall framerate as well.
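
''A minimal Python sketch (purely illustrative, with made-up names) of the asynchronous idea above: the particle update runs on its own thread, and a frame that arrives before the update has finished simply reuses the last completed state:''

<syntaxhighlight lang="python">
import threading
import time

class AsyncParticleSystem:
    """Simulate particles on a background thread; the render loop never waits.

    If a simulation step is still running when a frame is drawn, the frame
    reuses the last completed state, so a heavy particle update lags only
    the particles rather than the whole frame."""

    def __init__(self):
        self._latest_state = []            # last fully computed particle state
        self._lock = threading.Lock()
        threading.Thread(target=self._simulate_forever, daemon=True).start()

    def _simulate_forever(self):
        step = 0
        while True:
            state = self._expensive_step(step)   # may take longer than a frame
            with self._lock:
                self._latest_state = state
            step += 1

    def _expensive_step(self, step):
        time.sleep(0.05)                         # stand-in for heavy work
        return [f"particle state for step {step}"]

    def state_for_render(self):
        with self._lock:
            return self._latest_state            # whatever finished last

if __name__ == "__main__":
    particles = AsyncParticleSystem()
    for _ in range(5):                           # pretend 60 FPS render loop
        print("render frame using:", particles.state_for_render())
        time.sleep(1 / 60)
</syntaxhighlight>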


1:41:00: guaranteed to have the same value on the input for all users you can fully compute the output value on each user
01:26:44: So, that's pretty much the gist of it. The particle system is almost done. We'll probably start testing this upcoming week.


1:41:07: locally because it's all converging towards the same value um and as a result everybody ends
01:26:55: The audio system, that's going to be the next thing. After that, it's going to be the interface with Unity.


1:41:15: up you know with if not the same at Le very similar you know result on their
01:26:59: Once that is done, then the pull happens into the separate process, which is going to be relatively simple process at that point.


1:41:21: end it is also possible you can if you want to diverge this data model uh for
01:27:06: Because everything will be in place for the pullout from Unity to happen.


1:41:27: example value user override it does this where it has like but it kind an
01:27:13: So, hopefully this gives you a much better idea.


1:41:33: interesting way because it makes the the dares values it actually makes it part of data model so like the values that
01:27:17: And if you have any questions about it, feel free to ask. We're always happy to clarify how this is going to work.


1:41:39: each user supposed to get is still synchronized and like everybody knows like this user should be getting this
01:27:24: I'm going to go back. Boink. There we go.


1:41:45: value but the actual value that you drive it it you know that's diers for
01:27:31: We're going to sit down. There we go.


1:41:51: each user and it's like a mechan mechm built on this principle you know to handle this kind of scenario you can
01:27:36: So that was another of those kind of rabbit hole questions. I kind of did this explanation on my first episode.


1:41:58: also you know like sometimes dve things from things that going to be different on each user and have each user see a
01:27:43: But I kind of wanted to do it because ultimately I have a better setup with the right things now.


1:42:04: different thing you can diverge it the main point is um it's a deliberate
01:27:48: So we can also make a clip out of it so we have something to refer people to.


1:42:10: choice at least it should be in most cases unless you know you do it by accident but we'll you know try to make
01:27:54: But that's the answer to Nitra's question. I'm also checking the time. I have about 30 minutes left.


1:42:15: um make it harder to do it by accident uh because if you if you're like you know thinking I'm going to
01:27:59: So let's see. There's a few questions but I think we can get through them.


1:42:21: specifically make this thing this thing you know it's like it's much less likely you know
01:28:06: It's actually kind of working out. I've been worried a little bit because I'm taking a while to answer some of the questions and going on tangents.


1:42:28: to happen by accident you know like either by bug or like misbehavior the systems designed in a way to make sure
01:28:13: But it seems to kind of work out with the questions we have.


1:42:34: that everybody shares you know the same general experience
01:28:18: So next, ShadowX. Does all common splatting software encode... Oh, I already answered that one.


1:42:40: so generally like uh if if you for example considered ik like you mention
01:28:27: So VM is kind of like optimization layer rather than something akin to CLR or Chrome V8.


1:42:46: like ik um AK like the actual computations you know like of the bones
01:28:33: So it has the fundamentals of a VM, but the goal is just to figure out ahead of time what needs to be run and play it back quickly.


1:42:52: that's computed locally and the reason for that is because the inputs uh the inputs to the ik is you know the data
01:28:39: It's the same general principle as the CLR or Chrome's V8.


1:42:58: model itself which is synchronized and the realtime values are you know like hand positions if you have you know
01:28:45: VM is essentially just an environment in which the code can exist and in which the code operates.


1:43:04: tracking it's feet positions head position those are synchronized and
01:28:51: And the way the VM runs that code can differ. Some VMs can be purely interpreted.


1:43:09: because uh each user gets the same input to the ik the ik on everybody's end ends
01:28:59: You literally now maybe just have a switch statement that is just switching based on instruction and doing things.


1:43:15: up Computing you know the same or very similar result therefore the actual AK
01:29:05: Maybe it compiles into some sort of AST and then evaluates that.


1:43:20: itself doesn't need to be synchronized because um you know it's uh the final
01:29:11: Or maybe it takes it and actually emits machine code for whatever architecture you're running on.


1:43:26: positions like they are driven and essentially that is a way of the ik saying you know if you give me the same
01:29:16: There's lots of different ways for VM to execute your code.
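
''To make the "switch statement" style of execution concrete, here is a tiny, purely interpreted stack machine in Python (an illustration of the general idea, not how ProtoFlux is actually implemented):''

<syntaxhighlight lang="python">
def run(program):
    """A toy bytecode interpreter: one loop that dispatches on each instruction."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "PRINT":
            print(stack[-1])
        else:
            raise ValueError(f"unknown instruction {op}")
    return stack

# (2 + 3) * 4 -> prints 20
run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
     ("PUSH", 4), ("MUL", None), ("PRINT", None)])
</syntaxhighlight>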


1:43:33: inputs I'm going to produce you know I'm going to drive these bone positions to match those inputs in a somewhat you
01:29:20: So the way ProtoFlux executes code and the way CLR or V8 executes code is different.


1:43:39: know mostly deterministic way so that doesn't need to be you also mentioned
01:29:28: I think actually V8 does a hybrid where it converts some parts into machine code and some it interprets.


1:43:46: local prot flux uh with local prot flux um there is actually a way for you to
01:29:37: But it doesn't interpret the original typed code. It interprets some of the abstract syntax.


1:43:51: like hold some data that's outside of the data model so locals and stores they
01:29:43: I don't fully remember the details but I think V8 does a hybrid where it can actually have both.


1:43:57: are not synchronized if you drive something from those um it's going to
01:29:51: CLR I think always translates it to machine code but one thing they did introduce with the latest versions is they have multi-tiered JIT compilation.


1:44:02: diverge unless you unless you take the responsibility of computing that local in a way that's either convergent or
01:30:05: One of the things they do is when your code runs they will JIT compile it into machine code which is actually native code for a CPU.


1:44:08: like in or deterministic um so locals and Source they're not going to give you syn
01:30:16: And they JIT compile it fast because you don't want to be waiting for the application to actually run.


1:44:15: mechanism one thing that's missing right now what I want to do to prevent you know Divergence by accident is uh have
01:30:26: But that means they cannot do as many optimizations.


1:44:21: like a localness analysis so if you if you have like a a bunch of protox and you try to drive
01:30:28: What they do, though, is: when the JIT compiler makes that code in a very quick way, so it's not as optimal, it keeps a counter of how many times each method is called.


1:44:28: something and it's going to check is there any is that the source of this
01:30:41: And if it crosses a certain threshold, say the method gets called more than 30 times, it's going to trigger the JIT compiler to compile a much more optimized version.


1:44:34: value is there anything that is local and if it finds it is going to
01:30:51: It goes really heavy on the optimizations to make much faster code, which is going to take it some time.


1:44:39: give you a warning you're trying to drive something from a local value unless unless you make sure that value
01:31:00: But also in the meanwhile, as long as it's doing it, it can still keep running the slow code that was already JIT compiled.


1:44:46: is you know somehow like unless the results you know of this computation stay synchronized you know even if this
01:31:06: Once the JIT compiler is ready, it just swaps it out for the more optimized version and at that point your code actually speeds up.


1:44:52: value differs uh or if you make unless you make sure that the local value is you know the computed the same for every
01:31:17: So we have code that's being called very often, like the main game loop for example, it ends up compiling it in a very optimal way.


1:45:00: user or it's very similar this is going to diverge and that will make it you
01:31:26: You have something that runs just once, like for example some initialization method.


1:45:06: know much more deliberate choice where you're like okay like I'm no not doing this by accident I really want to drive
01:31:30: Like when you start up the engine, there's some initialization that only runs once.


1:45:12: something from a local value I'm taking responsibility of making sure this will much for users or if you have a reason
01:31:34: It doesn't need to do heavy optimizations on it because they're just a waste of time.


1:45:19: to like you know diverge things for each user is a deliberative choice you're saying I want this this part of data
01:31:38: It speeds up the startup time and it kind of optimizes for both.
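
''A toy Python sketch of the tiering idea above (the threshold, function names and "optimized" version are invented for illustration; the real CLR does this at the machine-code level):''

<syntaxhighlight lang="python">
HOT_THRESHOLD = 30   # after this many calls, swap in the optimized version

def tiered(quick_version, optimized_version):
    """Run the quickly-produced version first; once the call counter crosses
    the threshold, swap in the heavily optimized version (tiered compilation)."""
    state = {"calls": 0, "impl": quick_version}

    def wrapper(*args):
        state["calls"] += 1
        if state["calls"] == HOT_THRESHOLD:
            state["impl"] = optimized_version   # "recompiled" and swapped in
        return state["impl"](*args)

    return wrapper

def sum_squares_quick(n):          # straightforward, unoptimized loop
    return sum(i * i for i in range(n))

def sum_squares_optimized(n):      # closed-form version, much faster
    return (n - 1) * n * (2 * n - 1) // 6

sum_squares = tiered(sum_squares_quick, sum_squares_optimized)
for _ in range(100):
    sum_squares(1000)              # early calls take the slow path, later ones the fast one
</syntaxhighlight>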


1:45:26: model to be diverged um so that kind of like you
01:31:41: And I think the latest version, it actually added, I forget the term for it they used,


1:45:31: know answers the question there's like a fair bit like uh I can do to this but the just of it the local imperor FL like
01:31:50: but it's essentially like a multi-stage compilation where they look at what are the common arguments for a particular method


1:45:39: it literally means like this is not part of the data model this is not synchronized for you this is a mechanism
01:31:59: and then assume those arguments are constants and they will compile a special version of the method with those arguments as constants,


1:45:46: you use you know to store data outside of the data model and if you feed it back into the data model you sort of
01:32:07: which lets it optimize even more, because now it doesn't have to worry about this argument having different values and doing all the math for the general case.


1:45:52: need to come you know responsibility um for making sure it's either you know convergent or
01:32:14: It can pre-compute all that math that is dependent on that argument ahead of time so it actually runs much faster.


1:45:59: intentionally Divergent um the other part is uh this also applies if you drive something
01:32:21: And if we have a method that is oftentimes called with very specific arguments, it now runs much faster.


1:46:06: because if you if you use the local and then you write the value and the value is not driven the final data the final
01:32:28: And there's actually another VM that did this called the LuaJIT, which is like runtime for the Lua language.


1:46:12: value right into the data model ends up implicitly synchronized and you don't have the problem so this only applies if
01:32:39: And what was really cool about that one is, even though Lua is just considered this kind of scripting language,


1:46:19: you're driving something as well so so I hope that kind of like you know helps uh understand this a bit
01:32:47: LuaJIT was able to outperform languages like C and C++ in some benchmarks.


1:46:25: better uh just about 14 minutes left uh there's like one question uh I'll see
01:32:53: Because with C and C++, all of the code is compiled ahead of time.


1:46:31: how long this kind of takes to answer but I at this point uh um I might not be
01:32:59: So you don't actually know what kind of arguments you're getting.


1:46:36: able to answer like all the questions if more prop up uh but feel free to ask them I'll try to answer as many as
01:33:02: What LuaJIT was able to do is be like, okay, this value is always an integer.


1:46:41: possible uh until I get you know at full two hours um so let's see uh oie is asking
01:33:08: Or maybe this value is always an integer that's number 42.


1:46:50: you mentioned before one thing to do cing as a dependent after particles is this something you want to do um I kind
01:33:14: So I'm just going to compile a method that assumes this is 42.


1:46:57: of it is an optimization that's sort of like you know independent from the whole move uh that I feel could be fast enough
01:33:19: And it makes a super optimal version of the method.


1:47:05: I still haven't like decided like I'm kind of thinking about like you know slotting in in before the audio as well
01:33:23: And that runs. It's even more optimized than the C and C++.


1:47:10: as part of the performance optimizations because uh that one will help
01:33:28: Because C and C++ cannot make those assumptions.
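
''A hedged Python analogy for the specialization idea above (a real JIT bakes the constant into generated machine code; this sketch just pre-computes and guards the result for the hot argument value, which captures the same effect):''

<syntaxhighlight lang="python">
def make_specialized(generic_fn, specialize_after=10):
    """Once one argument value dominates the calls, answer it from a version
    specialized for that constant, behind a cheap guard check."""
    counts = {}
    specialized = {}            # argument value -> pre-baked result

    def wrapper(x):
        if x in specialized:                 # guard: the assumed constant holds
            return specialized[x]
        counts[x] = counts.get(x, 0) + 1
        result = generic_fn(x)               # generic version, always correct
        if counts[x] >= specialize_after:
            specialized[x] = result          # "compile" the special version for x
        return result

    return wrapper

def heavy_math(x):
    return sum((x * i) % 97 for i in range(100_000))

fast_math = make_specialized(heavy_math)
for _ in range(50):
    fast_math(42)                # the x == 42 case becomes nearly free
print(fast_math(7))              # other values still take the generic path
</syntaxhighlight>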


1:47:16: particularly with like memory usage CPU usage like during loading things uh you
01:33:32: There's some profiling compilers that actually run your code.


1:47:21: know for example when people like you know load into the world and you get kind of like that loading you know like
01:33:38: And they will try to also figure it out.


1:47:26: that kind of lag as the staff loads in um the cascading as said like uh dependencies they can partic help with
01:33:40: And then you compile your code with those profiling optimizations.


1:47:33: those cases when the users have a lot of stuff that's only visible to them but
01:33:45: And can do some of that too.


1:47:39: you know not everyone else because right now you will still load it um there's um there also like other
01:33:47: But it shows there's some benefits to the JIT compilers where they can be more adaptive.


1:47:46: parts like if you have like Wars that are cold you know or maybe the users are cold you load all of them at once and
01:33:57: And they can do it for free.


1:47:51: it's just you know this kind of big chunk with this system it would kind of more spread out um the part I'm like not
01:33:59: Because you don't have to do it as part of your development process.


1:47:58: certain about like is whether it's worth doing this now or doing this after like you know make the net n switch because
01:34:05: Because once you upgrade it in your system, you get all these benefits.


1:48:04: uh it is going to be beneficial on both so it's like it's one of those optimizations that's smaller and it's
01:34:11: And it's able to run your code that's the same exact code as you had before.


1:48:10: kind of independent you know of the big move to net nine and it could provide
01:34:17: It's able to run it much faster because it's a lot smarter how it converts it into machine code.


1:48:15: benefit you know even now even before you know moved on at nine and it will still provide benefit afterwards so I'm
01:34:24: So next question is, Nitra is asking, how about video players?


1:48:23: not 100% decided um on this one I'll have to like evaluate it a little bit um
01:34:29: Are they already full inside Froox Engine as well?


1:48:28: and evaluate you know how other things are kind of going it's something I want to do but uh no hard decision
01:34:31: No, for video players they actually exist more on the Unity side.


1:48:36: yet uh next uh climber is asking could you use the red highlighting from broken
01:34:36: And they are pretty much going to remain majorly on the Unity side.


1:48:42: perlex in a different color for local computation I don't really understand
01:34:41: The reason for that is because the video player has a very tight integration with the Unity engine


1:48:48: the question I'm sorry like if Pro is like red and it's broken like
01:34:46: because it needs to update the GPU textures with the decoded video data.


1:48:53: you generally like you know you you need to fix whatever is broken before you start using it again usually like if
01:34:52: It adds a bit to the mechanism because we will need to send the audio data back for the Froox Engine to process and send back.


1:48:59: it's red it's like there's something wrong I cannot run at all
01:35:01: So that's going to be...


1:49:04: so like I don't think I should be like you know using it anyway like if if it's
01:35:05: Video players are essentially going to be considered as an asset, because even stuff like textures.


1:49:09: red like you need to fix whatever the issue is
01:35:10: When you load a texture, it needs to be uploaded to the GPU memory, it needs to be sent to the renderer.


1:49:15: um next uh ner is asking are there any plans to open sore certain parts of the
01:35:15: And the way I plan to approach that one is through a mechanism called shared memory,


1:49:21: code for example on the components and perlex know so that committee can contribute to those so uh there's some
01:35:20: where the texture data itself, Froox Engine will allocate in a shared memory,


1:49:28: plans uh nothing's fully formalized yet um we've kind of had like some
01:35:24: and then it will pass over the pipe, it will essentially tell the renderer,


1:49:33: discussions about it so um I'm not going to go like too much super into details
01:35:30: here's shared memory, here's the texture information, the size, format, and so on,


1:49:39: but my general approach is like uh we would essentially do kind of gradual
01:35:36: please upload it to the memory under this handle, for example.


1:49:44: open sourcing you know of certain parts especially ones that could really benefit from Community
01:35:39: And it assigns it some kind of number to identify the texture.


1:49:49: contributions um one example a I can give you is you know uh for example the
01:35:45: And essentially sends that over to the Unity engine; the Unity engine is going to read the texture data,


1:49:55: Importer and exporter system where um and also like the device driver like
01:35:50: and upload it to the GPU, and it's going to send a message back to Froox Engine,


1:50:01: system where it's very what's the term like ripe for like
01:35:54: be like, you know, texture number 420 has been uploaded to the GPU.


1:50:07: open sourcing but essentially say you know this is the model importer this is you know uh volume importer and so on
01:35:59: And Froox Engine knows, OK, this one's now loaded.


1:50:14: this is our you know and this as you know this is support for this format this format um and we cannot do this you
01:36:02: And then when it sends it, please render these things, it's going to be like,


1:50:21: know like we we make this like uh we make we make the code you know open
01:36:05: OK, render this thing with texture number 422.


1:50:26: which will all for Community contributions so people could you know contribute stuff like you know fixes for like formats or some things importing
01:36:09: And it's going to send it as part of its package to Unity, and Unity will know,


1:50:32: wrong or alternatively you know you want to add support for some obscure format that we wouldn't support our like you
01:36:13: OK, I have this texture and this number, and it's going to prepare things.
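
''A minimal Python sketch of the shared-memory handshake described above, using two processes, a pipe, and a shared-memory block (the message fields and texture id are made up; the real engine/renderer protocol will differ):''

<syntaxhighlight lang="python">
from multiprocessing import Pipe, Process, shared_memory

def renderer(conn):
    """Renderer process: read pixel data out of shared memory, pretend to
    upload it to the GPU, then acknowledge the texture id over the pipe."""
    while True:
        msg = conn.recv()
        if msg["type"] == "quit":
            break
        if msg["type"] == "upload_texture":
            shm = shared_memory.SharedMemory(name=msg["shm_name"])
            pixels = bytes(shm.buf[:msg["byte_size"]])   # "upload to GPU" here
            shm.close()
            conn.send({"type": "texture_uploaded",
                       "texture_id": msg["texture_id"], "bytes": len(pixels)})

if __name__ == "__main__":
    engine_end, renderer_end = Pipe()
    Process(target=renderer, args=(renderer_end,), daemon=True).start()

    # Engine side: place the texture data into shared memory...
    data = bytes(64 * 64 * 4)                  # a 64x64 RGBA texture, all zeros
    shm = shared_memory.SharedMemory(create=True, size=len(data))
    shm.buf[:len(data)] = data

    # ...then tell the renderer where it is and how to interpret it.
    engine_end.send({"type": "upload_texture", "texture_id": 420,
                     "shm_name": shm.name, "byte_size": len(data),
                     "width": 64, "height": 64, "format": "RGBA8"})
    print(engine_end.recv())                   # acknowledgement for texture 420

    engine_end.send({"type": "quit"})
    shm.close()
    shm.unlink()
</syntaxhighlight>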


1:50:39: know ourselves because it's you know you're modding some kind of game or something and you want to like you know
01:36:20: It's going to be similar thing with video players, where the playback and decoding


1:50:45: mess with things um you know you now you can use the implementation that we
01:36:23: happens on the Unity side, and Froox Engine just sends some basic information


1:50:51: provided as a reference how another you know importer exporter um there's uh or like if you
01:36:28: to update it, be like, the position should be this and this,


1:50:59: want to like you know you need to like you know very specific fixes that are relevant to project you're working on
01:36:34: you should do this and this with the playback, and it's going to be sending back some audio data.


1:51:04: just make a fork of one of the you know one of ours or even communities and like you know modify for the purposes and you
01:36:39: But yeah, those parts are going to remain within Unity.


1:51:11: make changes that wouldn't make sense to have like you know in the default one but like that are useful to you so this
01:36:48: We have like 25 minutes, and there's only one question right now.


1:51:18: probably you know the kind of model I would want to like follow at least like initially we open source like things like partially uh where it kind of Mak
01:36:56: JViden4, I guess something I don't really understand about Resonite


1:51:25: sense uh but it's like also easy to do because like open sourcing that can be kind of complicated process like if you
01:36:59: is where simulation actually happens.


1:51:32: want it for everything because you know there's pieces of code that have like certain licensing you know and we need
01:37:02: Does the server simulate everything and the clients just pull,


1:51:38: to make sure like it's all compatible with the licensing to make sure you know everything's kind of audited and cleaned
01:37:04: or do the clients do some work and send it to the server? It's a mix.


1:51:43: up um so doing it like you know by chunks doing just like some systems I feel it's much more easier approachable
01:37:07: For example, player IK, local ProtoFlux.


1:51:51: way to start and we can you know kind of build from there the other part of this
01:37:10: Or do all servers and clients simulate everything?


1:51:56: is when you do open S something you generally need like maintainers um and
01:37:12: Local, as defined in ProtoFlux, can get pretty confusing pretty fast.


1:52:03: right now like we don't really have like super good process you know for um you
01:37:17: So usually it's a mix.


1:52:08: know handling kind of community contributions for these things um so that's something I feel we also need to
01:37:19: But the way Resonite works, or FrooxEngine works,


1:52:14: heavily improve and that means like you know we kind of need to have prepared some manpower to like you know look at
01:37:25: is by default, everything is built around your data model.


1:52:20: Community like P request make sure we have have a good kind of communication there and make that whole process kind of run smoothly and been like you know
01:37:29: And by default, the data model is implicitly synchronized.


1:52:27: some PRS that like piled up against you know like some of our projects uh like
01:37:33: Which means if you change something in the data model,


1:52:33: some of our open source parts and that we haven't really had chance to prare look at because everything has been kind
01:37:35: FrooxEngine will replicate it to everyone else.


1:52:38: of busy so I'm a little bit like hited to do it like you know now at least
01:37:41: And the way most things, like components and stuff works,


1:52:44: until like you know we clear up some more things and we have like better process so it is also like you know part of the consideration there but overall
01:37:44: is the data model itself; it's sort of like an authority on things.


1:52:52: it is something I'd want to do like I I feel like you know like um you as the community like like people are doing
01:37:51: It's like the data model says, this is how things should be.


1:52:58: lots of cool things and tools you know like the moding community they do lots of really neat things um and you know
01:37:55: And any state that ends up representing something,


1:53:06: doing the gradual kind of open sourcing I feel this a good way to empower you more you know to give you kind of more
01:38:01: like it represents something visual, some state of some system,


1:53:12: control over these things give you more control to fix some things because there's also like Parts you know like where we're small theme um so sometimes
01:38:05: that's fully contained within the data model.


1:53:21: like a is very limited to fix like n issues and if you give people you know the power to help contribute those fixes
01:38:08: And that's the really important part.


1:53:29: um I feel like overall you know the platform and the community can benefit from those uh as well as giving you a
01:38:09: The only thing that can be local to the components by default


1:53:34: ways to like you know um motive like because part of like resonite philosophy
01:38:13: is any caching data, or any data that can be deterministically


1:53:40: is you know giving people as much control as possible making the experience what you want it to be and I
01:38:20: computed from the data model.


1:53:47: feel you know by doing this you know if if you really don't like you know do may do something like or how like you know
01:38:23: So if the data model changes, it doesn't matter what internal data it has,


1:53:53: resonant handles certain things you can kind of fork that part and um you know
01:38:26: the data model says things should be this way.


1:53:59: make your own version of it like you know or fix up the issues you know you're not as dependent on us um as you
01:38:32: And then the whole synchronization is built on top of that.


1:54:05: otherwise would have been um the flip side to that is like and and part that
01:38:35: The whole idea is by default, you don't actually have to think about it.


1:54:10: I'm like usually kind of worried about is we want to also do it in a way that
01:38:39: The data model is going to handle the synchronization for you.


1:54:16: doesn't result in the platform you know fragment where everybody ends up you know on a different version of the build
01:38:43: It's going to resolve conflicts.


1:54:23: and then like you know you don't have this kind of shared Community anymore because I feel especially at this stage
01:38:44: It's going to resolve if multiple people change a thing,


1:54:29: you know that can end up hurting the platform um if it happens especially like too
01:38:48: or if people change a thing they're not allowed to,


1:54:35: early and and it's also like one of the reasons you know I was kind of thinking going with uh importers and exporters
01:38:50: it's going to resolve those data changes.


1:54:42: first because those will pretty much they cannot you know cause fragmentation
01:38:55: And you essentially build your chain to respond to data.


1:54:47: because they do not fragment the data model they just change how things are bur into or out of the data model um but
01:38:58: So if your data is guaranteed to be synchronized and conflict resolved,


1:54:56: you know the actual behavior is like it it doesn't make you incompatible in know other clients you can have like you know
01:39:03: then the behaviors that depend on the data always lead to the same,


1:55:01: importers for data formats that only you support um and still exist you know with
01:39:07: or at least convergent, results.
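
''A small Python sketch of the principle above (names invented for illustration): the only thing that gets replicated is the data model itself, and everything a component shows is a pure function, or a cache, of that replicated state:''

<syntaxhighlight lang="python">
class DataModelField:
    """A field whose writes are implicitly replicated to every peer."""
    def __init__(self, name, peers):
        self.name, self.peers = name, peers

    def set(self, value):
        for peer in self.peers:              # stand-in for network replication
            peer.receive(self.name, value)

class Peer:
    def __init__(self):
        self.data_model = {}                 # the synchronized state
        self._cache = {}                     # local-only, derivable from the data model

    def receive(self, name, value):
        self.data_model[name] = value
        self._cache.clear()                  # cached derivations get recomputed

    def label_text(self):
        # Behavior is a pure function of the synced data model, so every peer
        # computes the same result without any extra synchronization.
        if "label" not in self._cache:
            self._cache["label"] = f"score: {self.data_model.get('score', 0) * 10}"
        return self._cache["label"]

peers = [Peer(), Peer()]
score = DataModelField("score", peers)
score.set(4)
print([p.label_text() for p in peers])       # the same text on every peer
</syntaxhighlight>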


1:55:07: the same users and be able to you know join the some sessions um there pretty much kind of
01:39:11: What this does, it kind of gives you benefits to write systems in all different ways.


1:55:12: you know kind of on the whole like kind of op sourcing kind of thing uh and this the reason I kind of want to approach it
01:39:18: But the main thing is, you don't have to worry about synchronization.


1:55:18: this way is like you know take baby steps there see how it will see how like you know how everybody kind of responds
01:39:25: It just kind of happens automatically.


1:55:24: so say see how we you know are able to kind of handle this and then kind of you know be like okay like we're comfortable
01:39:29: And it kind of changes the problem where instead of...


1:55:31: you know with this we can now you know take a step further open source more parts you know um and just kind of you
01:39:34: Instead of like...


1:55:37: know making it a gradual process then like you know just big flip of a switch do that make
01:39:42: Instead of things being synced versus non-synced being a problem,


1:55:45: sense uh so we have about 4 minutes left
01:39:49: it turns it into an optimization problem.


1:55:50: uh there's no more questions so like I might um I don't if like this an a for Ramble because right now um
01:39:52: Because you could have people computing multiple kinds of things,


1:56:00: uh I don't know like what with ramble about uh because um I do like more gring
01:39:56: or computing it on the wrong end, things that can be computed from other stuff.


1:56:07: plus I've already gramble Bas plus a fair bit if there's any like Las Min question I'll try to answer it but I
01:40:00: And you end up wasting network traffic as a result.


1:56:12: might also just end up like a few minutes early um and as I end up rambling about rambling which is some
01:40:04: But for me, that's a much better problem to have than things getting out of sync.


1:56:19: sort of like you know metha rambling which I'm kind of doing right now I don't know what's I'm actually kind of
01:40:10: What we do have is the data model has mechanisms to optimize that.


1:56:25: curious like like everybody like um v v spting is that something you would like
01:40:14: One of those mechanisms being drives.


1:56:31: to see like you like to like you know play with uh especially if you can like you know bring stuff like this I can show
01:40:18: Drives essentially tell...


1:56:39: you uh I don't have like too many videos of those I have like one more no wait oh
01:40:20: The drive is a way of telling the data model,


1:56:47: I do have actually one video that I want to show uh I do need to
01:40:22: I'm taking control of this part of the data model.


1:56:53: fetch this one from YouTube because I haven't like imported this one in uh let's
01:40:26: Don't synchronize it.


1:57:00: see oh also like um actually let me just do this first before I start doing too
01:40:27: I am responsible for making sure this stays consistent when it needs to.


1:57:05: many things at once there we go uh so I'm going to bring this one
01:40:37: And the way to think about drives is you can have something like a SmoothLerp node.


1:57:16: in what once it loads as one's from YouTube
01:40:41: Which is one of the community favorites.


1:57:21: so it's like actually a bit worse quality uh but this is another gas ins spot that I well I have vogs but I need
01:40:44: And the way that works, it actually has its own internal computation that's not synchronized.


1:57:27: to make videos of more um this one I found like super neat because like this
01:40:48: Whatever the input to the SmoothLerp is, it is generally synchronized.


1:57:32: a pretty complex scene this is from GSR at blfc um you can see like it captures a
01:40:53: Because it comes from a data model, but the output doesn't need to be synchronized


1:57:38: lot of like kind of cool detail but there's a particular part I want you to pay attention to um I'm going to point
01:40:57: because it's convergent.


1:57:45: out in a sec because one of the huge benefits of gasan plus is they're really good at not not only you know soft and
01:40:59: So you're guaranteed to have the same value on the input for all users,


1:57:51: fuz details but also you know semi-transparent stuff uh so watch this
01:41:04: you can fully compute the output value on each user locally.


1:57:58: thing do you see like these kind of plastic well know plastic but like these transparent you know bits do you see
01:41:08: Because it's all converging towards the same value.


1:58:04: these look at that it's actually able to you know represent it like really well
01:41:12: And as a result, everybody ends up with, if not the same,


1:58:09: which is something you really don't get you know with the traditional mesh based you know photogrametry
01:41:18: at least very similar result on their end.
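
''A minimal Python sketch of a convergent drive like the one described (illustrative names, not the actual SmoothLerp implementation): the target comes from the synced data model, while the smoothed output stays local because every client converges to the same value:''

<syntaxhighlight lang="python">
class SmoothLerpDrive:
    """Drives a field locally: the target is synced, the smoothed output is not,
    because with the same inputs every client converges to the same result."""
    def __init__(self, speed=0.2):
        self.speed = speed
        self.current = 0.0                   # internal, local-only state

    def update(self, synced_target, dt=1.0):
        # Exponential smoothing toward the synced target.
        self.current += (synced_target - self.current) * min(1.0, self.speed * dt)
        return self.current

client_a, client_b = SmoothLerpDrive(), SmoothLerpDrive()
target = 10.0                                # lives in the synced data model
for _ in range(50):
    a, b = client_a.update(target), client_b.update(target)
print(round(a, 3), round(b, 3))              # both clients converge toward 10.0
</syntaxhighlight>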


1:58:16: um and that's actually one of the things you know like if you wanted to represent as a mesh you would kind of lose that
01:41:22: It is also possible, you can, if you want to, diverge this data model.


1:58:22: uh and that's you know why why G scenes are really good you know presenting scenes that direction photog is not um
01:41:27: For example, ValueUserOverride does this.


1:58:30: and I found like really neat and there have also lots of like you know splas I done um like uh this October I was like
01:41:32: But it does this in an interesting way, because it makes the divergence of values,


1:58:37: visiting in the US and I went to the USTA national park again um I have lot
01:41:36: it actually makes it part of the data model, so the values that each user is supposed to get


1:58:42: of scans from there um and like you know lot of them kind of have like you know some because there's a lot of Kaisers
01:41:41: is still synchronized, and everybody knows this user should be getting this value.


1:58:48: and everything and there's actually know there's like steam coming out of the Kaisers you know and and there's like you know water so like it's reflective
01:41:47: But the actual value that you drive with it, that's diverged for each user.


1:58:54: in some places um and I found that Gan plus that actually reconstructs pretty
01:41:51: And it's like a mechanism built on this principle to handle this kind of scenario.
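
''A tiny Python sketch of that pattern (hypothetical names, not the actual ValueUserOverride component): the per-user values are part of the shared state, and only the locally driven result differs per user:''

<syntaxhighlight lang="python">
class UserOverrideDrive:
    """The per-user values are themselves part of the synced data model, so
    everyone agrees on 'user X should see value V'; only the locally driven
    field actually differs from user to user."""
    def __init__(self, default):
        self.default = default
        self.overrides = {}                  # user -> value; replicated to everyone

    def set_override(self, user, value):
        self.overrides[user] = value         # this write would be synchronized

    def driven_value(self, local_user):
        # Each client drives the target field from its own entry.
        return self.overrides.get(local_user, self.default)

volume = UserOverrideDrive(default=1.0)
volume.set_override("alice", 0.2)            # everyone knows alice gets 0.2
print(volume.driven_value("alice"))          # 0.2 on alice's client
print(volume.driven_value("bob"))            # 1.0 on bob's client
</syntaxhighlight>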


1:59:00: well like it even captures you know some of the Steam and air and gives it you know more of a volume so they're like a
01:41:57: You can also sometimes drive things from things that are going to be different on each user,


1:59:06: really cool way you know of like representing those scenes and I just kind of want to be able to you know bring that in and be like you know
01:42:02: and have each user see a different thing. You can diverge it.


1:59:13: publish those and res and be like you know want you want to come see like you know bits of like you know Yellow Stone
01:42:06: The main point is, it's a deliberate choice.


1:59:18: bits of desent De you know like just go to this world and you can just view it and show it to
01:42:10: At least it should be in most cases, unless you do it by accident.


1:59:24: people um but yeah that that that actually kind of fill time nicely
01:42:13: But we'll try to make it harder to do it by accident.


1:59:30: because uh we have about 30 seconds left so uh I'm pretty much going to end it
01:42:21: If you have to specifically make this thing desync, it's much less likely to happen by accident.


1:59:35: there so thank you very much uh for everyone you know for your questions thank you very much for like you know watching and uh uh you know and
01:42:30: Either by bug or misbehaviour.


1:59:42: listening like the my ramblings and explanations and going off tangents uh I
01:42:32: The system is designed in a way to make sure that everybody shares the same general experience.


1:59:47: hope you like you enjoy this episode and um than you also very much you know for just like supporting resite
01:42:41: Generally, if you for example consider IK, like you mentioned IK.


1:59:54: whether it's like you know like on our patreon uh you know whether it's just by making lots of cool content or you know
01:42:48: IK, the actual computations of the bones, that's computed locally.


1:59:59: sharing stuff on social media like whatever whatever you do um you know it
01:42:54: And the reason for that is because the inputs to the IK is the data model itself which is synchronised.


2:00:05: it helps the platform um and like you know we appreciate it a lot so thank you very
01:43:01: And the real-time values are hand positions, if you have tracking, it's feet position, head position.


2:00:11: much and I'll see you next week by
01:43:07: Those are synchronised.


2:00:16: bye Warp
01:43:09: And because each user gets the same inputs to the IK, the IK on everybody's end ends up computing the same or very similar result.
 
01:43:19: Therefore the actual IK itself doesn't need to be synchronised.
 
01:43:23: Because the final positions are driven.
 
01:43:27: And essentially that is a way of the IK saying, if you give me the same inputs, I'm going to drive these bone positions to match those inputs in a somewhat mostly deterministic way.
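
''The same idea in miniature Python form (a stand-in, nothing like a real IK solver): a deterministic function of the synced inputs, run locally on every client, so the driven outputs match without being sent over the network:''

<syntaxhighlight lang="python">
def solve_elbow(shoulder, hand):
    """Stand-in for an IK step: a deterministic function of synced inputs."""
    return tuple((s + h) / 2 for s, h in zip(shoulder, hand))   # midpoint "elbow"

synced_inputs = {"shoulder": (0.0, 1.5, 0.0), "hand": (0.4, 1.1, 0.3)}
print(solve_elbow(**synced_inputs))   # identical driven result on every client
</syntaxhighlight>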
 
01:43:42: So that doesn't need to be.
 
01:43:45: You also mentioned local protoflux.
 
01:43:47: With local protoflux, there is actually a way for you to hold some data that's outside of the data model.
 
01:43:54: So locals and stores, they are not synchronised.
 
01:43:58: If you drive something from those, it's going to diverge.
 
01:44:03: Unless you take the responsibility of computing that local in a way that's either convergent or deterministic.
 
01:44:11: So locals and stores, they're not going to give you a synchronization mechanism.
 
01:44:15: One thing that's missing right now, what I want to do to prevent divergence by accident, is have a localness analysis.
 
01:44:23: So if you have a bunch of protoflux and you try to drive something, it's going to check.
 
01:44:31: Is that the source of this value? Is there anything that is local?
 
01:44:38: And if it finds it, it's going to give you a warning.
 
01:44:40: You're trying to drive something from a local value.
 
01:44:42: Unless you make sure that the results of this computation stay synchronized, even if this value differs.
 
01:44:54: Or unless you make sure that the local value is computed the same for every user or is very similar, this is going to diverge.
 
01:45:02: And that will make it a much more deliberate choice.
 
01:45:08: Where you're like, OK, I'm not doing this by accident, I really want to drive something from a local value.
 
01:45:13: I'm taking the responsibility of making sure this will match for users.
 
01:45:18: Or if you have a reason to diverge things for each user, it's a deliberate choice.
 
01:45:24: You're saying, I want this part of the data model to be diverged.
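
''A rough Python sketch of what such a localness analysis could look like (the node names and structure are invented; this is not how ProtoFlux represents graphs): walk upstream from whatever feeds a drive and flag any local source:''

<syntaxhighlight lang="python">
class Node:
    """A hypothetical graph node; locals/stores are flagged as is_local."""
    def __init__(self, name, is_local=False, inputs=()):
        self.name, self.is_local, self.inputs = name, is_local, list(inputs)

def find_local_sources(node, seen=None):
    """Collect every local source upstream of the node feeding a drive;
    if any exist, the driven value could silently diverge between users."""
    seen = seen if seen is not None else set()
    if id(node) in seen:
        return []
    seen.add(id(node))
    found = [node.name] if node.is_local else []
    for upstream in node.inputs:
        found += find_local_sources(upstream, seen)
    return found

synced_time = Node("WorldTime")
local_offset = Node("Local<float>", is_local=True)
add = Node("Add", inputs=[synced_time, local_offset])

locals_found = find_local_sources(add)
if locals_found:
    print("Warning: drive source depends on local values:", locals_found)
</syntaxhighlight>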
 
01:45:30: So that kind of answers the question. There's a fair bit I can do to this.
 
01:45:35: But the gist of it, the local in ProtoFlux, it literally means this is not part of the data model.
 
01:45:43: This is not synchronized for you. This is a mechanism you use to store data outside of the data model.
 
01:45:49: And if you feed it back into the data model, you sort of need to kind of take responsibility for making sure it's either convergent.
 
01:45:59: Or intentionally divergent.
 
01:46:02: The other part is, this also applies if you drive something.
 
01:44:06: Because if you use the local and then you write the value, and the value is not driven,

01:44:10: the final value written into the data model ends up implicitly synchronized, and you don't have the problem.
 
01:46:17: So this only applies if you're driving something as well.
 
01:46:22: So I hope that kind of helps understand this a fair bit better.
 
01:46:26: Just about 14 minutes left. There's still like one question.
 
01:46:30: I'll see how long this kind of takes to answer, but at this point I might not be able to answer all the questions if more pop up.
 
01:46:39: But feel free to ask them, I'll try to answer as many as possible until I get it in the full two hours.
 
01:46:47: So let's see.
 
01:46:49: Ozzy is asking, you mentioned before wanting to do cascading asset dependencies after particles, is this something you want to do?
 
01:46:56: It is an optimization that's sort of independent from the whole move, that I feel could be fast enough.
 
01:47:05: I still haven't decided, I'm kind of thinking about slotting in before the audio as well, as part of the performance optimizations,
 
01:47:12: because that one will help, particularly with memory usage, CPU usage, during loading things.
 
01:47:21: For example, when people load into the world, and you get the loading lag as the stuff loads in.
 
01:47:29: The cascading asset dependencies, they can particularly help with those cases when the users have a lot of stuff that's only visible to them,
 
01:47:39: but not everyone else, because right now you will still load it.
 
01:47:46: If you have things that are culled, or maybe the users are culled, you'll load all of them at once, and it's just this kind of big chunk.
 
01:47:53: With this system, it will be kind of more spread out.
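
''A small Python sketch of the loading behavior being described (names are made up): assets are only fetched when something actually visible to the local user depends on them, which naturally spreads the loading out:''

<syntaxhighlight lang="python">
class AssetCache:
    """Load an asset only when a dependent, locally visible object needs it."""
    def __init__(self):
        self.loaded = {}

    def get(self, url):
        if url not in self.loaded:
            print("loading", url)            # the network/disk fetch would go here
            self.loaded[url] = f"<data of {url}>"
        return self.loaded[url]

class WorldObject:
    def __init__(self, texture_url, visible_to_me):
        self.texture_url, self.visible_to_me = texture_url, visible_to_me

def update_visible(objects, cache):
    for obj in objects:
        if obj.visible_to_me:                # hidden objects never trigger a load
            cache.get(obj.texture_url)

cache = AssetCache()
scene = [WorldObject("texture-a.png", visible_to_me=True),
         WorldObject("texture-b.png", visible_to_me=False)]
update_visible(scene, cache)                 # only texture-a.png gets loaded
</syntaxhighlight>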
 
01:47:57: The part I'm not certain about is whether it's worth doing this now, or doing this after making the .NET 9 switch, because it is going to be beneficial on both.
 
01:48:06: So it's one of those optimizations that's smaller, and it's kind of independent of the big move to .NET 9.
 
01:48:14: And it could provide benefit even now, even before we move to .NET 9.
 
01:48:19: And it will still provide benefit afterwards.
 
01:48:22: I'm not 100% decided on this one, I'll have to evaluate it a little bit, and evaluate how other things are going.
 
01:48:30: It's something I want to do, but no hard decision yet.
 
01:48:38: Next, Climber is asking, could you use the red highlighting from broken ProtoFlux in a different color for local computation?
 
01:48:46: I don't really understand the question, I'm sorry.
 
01:48:50: If ProtoFlux is red and it's broken, you need to fix whatever is broken before you start using it again.
 
01:48:58: Usually if it's red, there's something wrong and it cannot run at all.
 
01:49:06: I don't think you should be using it anyway. If it's red, you need to fix whatever the issue is.
 
01:49:16: Next, Nitra is asking, are there any plans to open-source certain parts of the code, for example the components and ProtoFlux nodes, so that the community can contribute to those?
 
01:49:27: There's some plans, nothing's fully formalized yet.
 
01:49:32: We've had some discussions about it.
 
01:49:35: I'm not going to go too much into details, but my general approach is we would essentially do gradual open-sourcing of certain parts, especially ones that could really benefit from community contributions.
 
01:49:51: One example I can give you is, for example, the Importer and Exporter system, and also the Device Driver system, where it's very ripe for open-sourcing
 
01:50:08: where we essentially say, this is the Model Importer, this is the Volume Importer, and so on, and this is support for this format and this format.
 
01:50:19: And once we do this, we make the code open, which will allow for community contributions, so people could contribute stuff like fixes for formats, or for things importing wrong.
 
01:50:33: Or alternatively, we want to add support for some obscure format that we wouldn't support ourselves, because you're modding some kind of game or something and you want to mess with things.
 
01:50:48: You can now use the implementation that we provided as a reference for how to author another importer or exporter.
 
01:50:58: Or, if you need very specific fixes that are relevant to the project you're working on, you just make a fork of one of ours, or even the community's, and modify it for your purposes.
 
01:51:10: And you make changes that wouldn't make sense to have in the default one, but that are useful to you.
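
''A tiny Python sketch of the kind of plug-in registry this suggests (entirely hypothetical, not Resonite's actual importer interface): each importer declares the formats it handles and produces data-model objects, so a fork can add an obscure format without touching the data model itself:''

<syntaxhighlight lang="python">
IMPORTERS = {}

def register_importer(*extensions):
    """Decorator that maps file extensions to an importer function."""
    def decorator(fn):
        for ext in extensions:
            IMPORTERS[ext.lower()] = fn
        return fn
    return decorator

@register_importer(".obj", ".fbx")
def import_model(path):
    return {"type": "Mesh", "source": path}

@register_importer(".weirdvox")              # an obscure format a fork might add
def import_weird_voxels(path):
    return {"type": "Volume", "source": path}

def import_file(path):
    ext = path[path.rfind("."):].lower()
    importer = IMPORTERS.get(ext)
    if importer is None:
        raise ValueError(f"no importer registered for {ext}")
    return importer(path)

print(import_file("castle.weirdvox"))        # handled by the community importer
</syntaxhighlight>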
 
01:51:16: So that's probably the kind of model I would want to follow, at least initially: we open source things partially, where it makes sense and where it's also easy to do, because open sourcing can be a complicated process if you want it for everything. There's pieces of code that have certain licensing, and we need to make sure it's all compatible with the licensing, and make sure everything's audited and cleaned up.
 
01:51:44: So doing it by chunks, doing just some systems, I feel it's a much more easier, approachable way to start, and we can build from there.
 
01:51:55: The other part of this is, when you do open source something, you generally need maintainers.
 
01:52:03: Right now we don't really have a super good process for handling community contributions for these things, so that's something I feel we also need to heavily improve.
 
01:52:16: And that means we need to have prepared some manpower to look at the community pull requests, make sure we have a good communication there, and make that whole process run smoothly.
 
01:52:26: And also there's been some PRs that piled up against some of our projects, some of our open source parts, and that we haven't really had a chance to prepare to look at because everything has been kind of busy.
 
01:52:39: So I'm a little bit hesitant to do it now, at least until we clear up some more things and we have a better process.
 
01:52:47: So it is also part of the consideration there.
 
01:52:51: But overall, it is something I'd want to do. I feel like you as a community, people are doing lots of cool things and tools.
 
01:53:00: Like the modding community, they do lots of really neat things.
 
01:53:06: And doing the gradual open sourcing, I feel that's a good way to empower you more, to give you more control over these things, give you more control to fix some things.
 
01:53:16: These parts, we're a small team, so sometimes our time is very limited to fix certain niche issues.
 
01:53:24: And if you give people the power to help contribute those fixes, I feel like overall the platform and the community can benefit from those.
 
01:53:34: As well as giving you a way to...
 
01:53:38: A big part of Resonite's philosophy is giving people as much control as possible.
 
01:53:43: Making the experience what you want it to be.
 
01:53:46: And I feel by doing this, if you really don't like how we do something, or how Resonite handles certain things, you can fork that part and make your own version of it.
 
01:54:00: Or fix up the issues. You're not as dependent on us as you otherwise would have been.
 
01:54:07: The flipside to that, and the part that I'm usually worried about, is we also want to do it in a way that doesn't result in the platform fragmenting.
 
01:54:19: Where everybody ends up on a different version of the build, and then you don't have this shared community anymore.
 
01:54:27: Because I feel, especially at this stage, that can end up hurting the platform.
 
01:54:33: If it happens, especially too early.
 
01:54:36: And it's also one of the reasons I was thinking of going with importers and exporters first.
 
01:54:42: Because they cannot cause fragmentation, because they do not fragment the data model.
 
01:54:49: This just changes how things are brought into, or out of, the data model.
 
01:54:57: It doesn't make you incompatible with other clients.
 
01:55:01: You can have importers for data formats that only you support, and still exist with the same users.
 
01:55:08: And be able to join the same sessions.
 
01:55:11: That's pretty much on the whole open-sourcing kind of thing.
 
01:55:16: And that's the reason I want to approach it this way.
 
01:55:19: Take baby steps there, see how it works, see how everybody responds.
 
01:55:24: See how we are able to handle this, and then be comfortable with this.
 
01:55:32: We can now take a step further, open-source more parts.
 
01:55:37: And just make it a gradual process, rather than just a big flip of a switch, if that makes sense.
 
01:55:48: We have about 4 minutes left, there's no more questions.
 
01:55:52: So I might... I don't know if this is enough for Ramble, because right now...
 
01:56:01: I don't know what I would ramble about, because...
 
01:56:05: Do I want to ramble more about Gaussian splatting? I've already rambled about Gaussian splatting a fair bit.
 
01:56:10: If there's any last-minute question, I'll try to answer it, but I might also just end up a few minutes early.
 
01:56:15: And as I end up rambling about rambling, which is some sort of meta rambling, which I'm kind of doing right now...
 
01:56:23: I don't know, I'm actually kind of curious, like, with the Gaussian splatting thing, is that something you'd like to see? Like, you'd like to play with?
 
01:56:34: Especially if you can bring stuff like this.
 
01:56:36: I can show you... I don't have, like, too many videos of those. I have, like, one more... no, wait.
 
01:56:45: Oh! I actually do have one video that I want to show. I do need to fetch this one from YouTube, because I haven't, like, imported this one. Hang on.
 
01:57:00: Let's see... Oh, so, like... Actually, let me just do this first before I start doing too many things at once.
 
01:57:08: There we go.
 
01:57:11: So I'm going to bring this one in.
 
01:57:18: Once it loads... This one's from YouTube, so it's, like, actually a little bit worse quality.
 
01:57:24: But this is another Gaussian splat. Well, I have lots, but I need to make videos of more.
 
01:57:30: This one I found, like, super neat, because, like, this is a very complex scene. This is from GSR at BLFC.
 
01:57:37: You can see, like, it captures a lot of, like, kind of cool detail, but there's a particular part I want you to pay attention to.
 
01:57:44: I'm going to point this out in a sec, because one of the huge benefits of Gaussian Splatters is they're really good at not only, you know, soft and fuzzy details, but also, you know, semi-transparent stuff.
 
01:57:56: So, watch this thing. Do you see, like, these kind of plastic, well, not plastic, but, like, these transparent, you know, bits? Do you see these?
 
01:58:05: Look at that. It's actually able to, you know, represent that, like, really well, which is something you really don't get, you know, with a traditional mesh-based, you know, photogrammetry.
 
01:58:16: And that's actually one of the things, you know, like, if you wanted to represent it as a mesh, you'd kind of lose that.
 
01:58:22: And that's why, you know, Gaussian splats are really good at representing scenes in a way that traditional photogrammetry is not.
 
01:58:32: And there's also lots of, like, you know, splats I've done, like, this October, I was, like, visiting the US, and I went to the Yellowstone National Park again.
 
01:58:42: I have lots of scans from there.
 
01:58:45: And, like, you know, a lot of them kind of have, like, you know, some, because there's a lot of geysers and everything, and there's actually, you know, there's, like, steam coming out of the geysers, you know, and there's, like, you know, water, so, like, it's reflective in some places.
 
01:58:56: And I found, with Gaussian splats, it actually reconstructs pretty well, like, it even captures, you know, some of the steam and air, and gives it, you know, more of a volume.
 
01:59:06: So they're, like, a really cool way, you know, of, like, representing those scenes, and I just kind of want to be able to, you know, bring that in and be, like, you know, publish those on Resonite, and be, like, you know, you want to come see, like, you know, bits of, like, you know, Yellowstone, bits of this and that, you know, like, just go to this world and you can just view it and show it to people.
 
01:59:26: But yeah, that actually kind of filled the time, actually, because we have about 30 seconds left, so I'm pretty much going to end it there.
 
01:59:36: So thank you very much for everyone, you know, for your questions, thank you very much for, like, you know, watching and, you know, and listening, like, my ramblings and explanations and going off tangents.
 
01:59:47: I hope you, like, you enjoyed this episode and thank you also very much, you know, for just, like, supporting Resonite, whether it's, like, you know, like, on our Patreon, you know, whether it's just by making lots of cool content or, you know, sharing stuff on social media, like, whatever, whatever you do, you know, it, it helps the platform and, like, you know, we appreciate it a lot.
 
02:00:16: Warp.

Revision as of 09:43, 7 May 2025

This is a transcript of The Resonance from 2024 December 1.

This transcript is auto-generated from YouTube using Whisper. There may be missing information or inaccuracies reflected in it, but it is better to have searchable text in general than an unsearchable audio or video. It is heavily encouraged to verify any information from the source using the provided timestamps.

00:12: Waaah...

00:19: Hello...

00:20: Well, wait...

00:24: I've actually clicked it on the menu.

00:28: Hello JViden4.

00:33: Hello everyone.

00:36: I'm just making sure everything is running, got all the stream stuff.

00:52: Make sure my audio is all good, can you hear me fine?

00:56: Well, I was loud by 1.3 seconds.

00:59: Also thank you for the cheer Nitra, that was quick.

01:05: Okay, so...

01:06: Let me make sure...

01:12: Channel, there we go.

01:14: Everything should be going.

01:15: So hello everyone.

01:18: Oh boy, I'm getting feedback, there we go.

01:21: Hello everyone, I'm Frooxius, and welcome to the third episode of The Resonance.

01:26: Oh my god, thank you so much for the cheers Emergence and Temporal Shift.

01:30: Hello everyone.

01:32: So, this is the third episode of The Resonance.

01:36: It's essentially like a combination of office hours and podcasts.

01:40: Where you can ask anything about...

01:45: Anything about The Resonite, whether it's its development, philosophy, how things are going with development, its past, its future, where it's heading.

01:54: The goal is to have a combination of Q&A, so you can ask questions.

01:58: Whatever you'd like to know, I try to answer the best of my ability.

02:03: And we also have...

02:05: I'm hearing that the microphone sounds windy, let me double-check that OBS didn't switch microphone on me again.

02:13: Test 1-2. It's in front of my face.

02:18: Properties. Yeah, it's using the right one.

02:22: Is it understandable?

02:25: It's really strange.

02:28: Like, it shouldn't be compressing.

02:31: It's a wireless microphone, but it's an un-custom thing.

02:35: But, anyway, let me know if the voice is OK if it's understandable.

02:42: Anyway, you're free to ask any questions, and I'm also going to do some general talking about the high-level concepts of Resonite, where it's past its, what its future is going to be, which direction we want to head it, and so on.

02:59: One thing, if you want to ask questions, make sure to put a question mark. Actually, let me double-check.

03:07: Oh, I didn't save the version with the auto-add.

03:11: Auto-pin. There we go. OK, now it should be working.

03:15: Make sure to end your question with a question mark, the way it kind of pops on my thing.

03:21: And we already have some light popping up.

03:23: So, with that, we should be good to get started.

03:26: I might switch this camera, just like this. There we go.

03:30: So, hello everyone. Hello, I'm bad at names. Hello Trey Bourg. Hello Lexavo.

03:37: So, we actually got the first question from Lexavo.

03:40: Do you think you'll have any other guests on your show that might focus on specific topics?

03:47: It's possible. Like, the goal of this is kind of like, you know, sort of like my kind of office hours.

03:57: So, it's probably going to be like the main focus, but I'm also kind of playing with the format a bit.

04:03: Unfortunately, Cyro couldn't make it. Like, I usually have Cyro co-hosting because we can have a good back and forth

04:07: on the technical things, and sort of like, you know, the feel of Resonite.

04:14: But I kind of like, I don't see where it kind of goes, because I had like some ideas,

04:19: and for the first two streams that I did, there was like so many questions that we didn't really get much to the chill parts of it.

04:26: So we're going to kind of like explore that.

04:28: We might have like some other people as well, kind of talk to them about like specific stuffs of Resonite,

04:33: but I don't have any super specific plans just yet.

04:40: So we'll kind of see, at least for starters I kind of want to keep things simple, you know, take it like essentially baby steps.

04:49: But I feel like probably at some point I'll start bringing in more people as well,

04:54: so we can kind of talk about like how some stuff has been going and so on, but I'm still kind of figuring things out.

05:02: So the next question is from Emergence at Temporal Shift, what is the funnest particle?

05:12: And it kind of depends. Like, I don't know, maybe the thing that comes to mind is, you know, particles that make sounds,

05:20: but that's one of the things you cannot do right now.

05:24: You'd make like a particle that does some kind of fun plop sound, or goes like bling every time it, you know, bounces or something like that.

05:34: There's actually a kind of interesting thing, because it's a request we got in the past. As you know, I'm kind of going off on a tangent right now already,

05:42: where you can, where people want to like you know particles that make sound when they collide with something, or they generally want events so they can react.

05:51: The only thing is particles, the simulation runs locally on each person, so it's not like 100% in sync.

05:59: The people will be seeing similar things but not exactly the same.

06:03: So like, you may have a clump of particles, and one might, you know, go this way for me, and for the other person it goes this way.

06:12: So if you have like one person handling the events, then something might happen you know, like say like for me the particle hits here and for the other person it hits here.

06:21: So if you do some event at this point, then it's going to be like a bit of a disconnect for the users, because for them the particle hit over here or maybe just missed.

06:32: And it's kind of an interesting problem, and one way to kind of approach that kind of thing is make sure your effects are all the things that are local.

06:39: So like for example local sound, that way everybody you know, say if I do like bubbles, and if you like you know bop them and they pop, you will hear the pop sound and it's going to be kind of localized mostly to you.

06:51: And if it's like similar enough, it's just going to be close enough that people will not notice.

06:57: But it's a curious problem, and that was a little bit of a tangent for the question.

07:06: So the next one we have, JViden4 is asking: I was confused by the stress test. The announcement said it was run on .NET 8 as opposed to .NET 9. Was it a typo or was it meant to establish a baseline?

07:18: So it was kind of neither. We have a .NET 9 headless ready. We kind of wanted to push it, like give people time to prepare.

07:29: But the team running the event, they decided to go with .NET 8 for the testing.

07:35: And the main test wasn't even the performance of the headless itself, it was to make sure that the hardware it's running on and the connection it's running on are able to handle the event.

07:47: This wasn't as much testing the performance of the headless itself, but more kind of a combination of hardware and make sure we are ready for the event with the setup we have.

07:57: And it was also one of the reasons because we wanted the event to be as flawless as possible.

08:03: The team decided to go with .NET 8 because .NET 9 technically isn't released yet, so we kind of stuck with that.

08:14: Even though the probability of it breaking things is very low, even if it was a 1% chance that something might break, we wanted to eliminate that 1%.

08:29: GlovinVR is asking, is there any focus on an actual tutorial in Resonite? Do new users suffer from not understanding the dashboard, how to find avatars, and how to find people?

08:37: This seems like an easy thing that can be delegated out and does not need to take up any of your bandwidth.

08:41: Yes, that's actually something that the content team is largely working on. They've been looking at ways to improve it with their experience and overhaul some parts of it.

08:50: Even the existing experience that we have has already been mostly delegated.

08:57: It's part of that, because getting new users to the platform, it crosses lots of different systems.

09:04: Because when you start up, we want to make sure that the user has an account, that their audio is working.

09:12: So we guide them through a few initial steps, and that's a part that's more on the coding side.

09:21: If there's problems with that part, or there's something we need to add to that part, then it needs to be handled by our engineering team.

09:27: Then the user gets brought into the in-world tutorial that explains things, and it's mostly handled by the content team.

09:34: We also had Ariel, who's our new person handling all the marketing stuff and development, communications, and so on.

09:44: She's been looking, because she's relatively new to Resonite as well, so she's been using that perspective to be like,

09:51: this is what we could do to smooth out the initial user experience, and she's been talking with the content team.

09:58: And we're looking at how do we improve that experience to reduce frustrations for new users.

10:07: And there's definitely a lot of things you could do.

10:09: The only part is it's always difficult. I won't say it's a simple thing to do,

10:17: because you always have to balance how much information do you give the user at once.

10:23: And in the past, we tried approaches where we told them about everything.

10:26: There's inventory, there's contacts, there's this thing, and this thing, and this thing.

10:31: And what we found ends up happening is a lot of users, they get overwhelmed, and they just shut down,

10:40: and they don't even understand the basic bits. They will not know how to grab things, how to switch locomotions.

10:46: So you kind of have to ease the users into it, build simple interactions, and building those kinds of tutorials takes a fair bit of time.

11:00: There's other aspects to this as well. For example, one of the things that people want to do when they come in here,

11:05: they want to set up their avatar. And the tricky thing with that is that it requires use of advanced tools,

11:13: like the developer tools and so on, it requires use of the avatar creator.

11:17: And the avatar creator is something we want to rewrite, but that's an engineering task.

11:22: That's not something that the content team can do right now.

11:27: So there's a lot of aspects to this. They can build a better tutorial for some things, but some things do require some engineering work.

11:36: And we also kind of have to design things in a way that avoids wasting too much effort on certain things,

11:43: because we know we're going to rework stuff like the UI, the inventory UI is going to be reworked at some point.

11:52: So then it becomes a question of how much time do we invest into the tutorial for the current one when we're going to replace it.

11:57: So some of those parts, we might just do a simple tutorial and do a better one later on once the engineering task comes through.

12:07: So there's just a lot of complexities to these kinds of things and it takes time to improve them.

12:16: What helps us the most is getting information about what are the particular frustration points for new users.

12:23: If somebody new comes to the platform, what do they get stuck on? What do they want to do? What's their motivation?

12:30: Because if we even know the user wants to set up their avatar, we can be like, okay, we're going to put things that direct them in the right direction.

12:39: But also with the avatar setup, there's always a combination of how do we make the tooling simpler so we don't need as much tutorial for it.

12:49: Because one of the things we did a few months back is introduce the Resonite packages.

12:54: And if there exists a Resonite package for the avatar that the user wants to use, they just drag and drop, it makes the whole process much simpler.

13:01: We don't have to explain how to use the developer tool, how to use the material tool, we usually just kind of drag and drop and you have a simple interface.

13:10: But that doesn't work in 100% of the cases, so it's a particularly challenging problem.

13:19: It's something we do talk about on the team, it's something that's important to us.

13:24: We want to ease in the onboarding of new users, make them as comfortable as possible.

13:29: And we're kind of working at it from different fronts, both from engineering and from the content side, as well as marketing and communications.

13:41: Let's see...

13:41: Jack the Fox author is asking,

13:43: My question for today is about ProtoFlux. In what direction do you want the language to evolve going forward? What future language features are you looking forward to?

13:52: So there's like a bunch...

14:06: There's actually...

14:08: One of the...

14:12: One of the things about...

14:16: The way I view visual scripting is that it has its drawbacks and has its benefits.

14:24: And one of the drawbacks is that when you write really complex behaviors, it gets a lot harder to manage.

14:32: Where typical text-based programming language might be simpler.

14:37: But one of its benefits is that you literally...

14:39: It's very hands-on.

14:41: You literally drag wires. If I want to control these lights, I just pull things from this and I drag wires.

14:47: And it has a very hands-on feeling. It's very spatial.

14:53: And the way I imagine the optimal way for this to work is to actually be combined with a more typical text-based programming language.

15:06: Where if you have a lot of heavy logic, a lot of complex behaviors...

15:16: It's much simpler to code things that way.

15:22: But then if you want to wire those complex behaviors into the world, that's where visual scripting can come in handy.

15:29: And I think we'll get the most strength by combining both.

15:34: And the way I wanted to approach the typical text-based programming is by integration of WebAssembly.

15:41: Which will essentially allow you to use lots of different languages, even languages like C and C++.

15:50: With those you can bring support for other languages like Lua, Python, lots of other languages.

15:57: Write a little bit complex code, and then some of that code might be exposed as a node.

16:01: And that node you kind of wire into other things, you do maybe a little extra operations.

16:05: It's almost like, if you're familiar with electronics, it's almost like having an integrated circuit.

16:12: And the integrated circuit, it has a lot of the complex logic.

16:16: And it could be written in a typical language, compiled into a WebAssembly module.

16:23: And then the integrated circuit is going to have a bunch of extra things around it that are wired into inputs and outputs.

16:29: And make it easier to interface with things.

16:36: So to me that's the most optimal state, where we have both.

16:40: And we can combine them in a way where you get the strengths of each, and weaknesses of neither essentially.
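
A minimal sketch of the "integrated circuit" idea described above, in Python rather than an actual WebAssembly module; the names and the node registration are purely illustrative, not Resonite's API:

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class NodeDefinition:
    name: str
    inputs: tuple            # names of the wired inputs
    outputs: tuple           # names of the produced outputs
    body: Callable[..., dict]

def smooth_damp(current: float, target: float, velocity: float, dt: float) -> dict:
    # The "complex logic" that would live in compiled code (e.g. a WebAssembly module).
    new_velocity = velocity + (target - current) * dt * 4.0
    return {"position": current + new_velocity * dt, "velocity": new_velocity}

# Expose the function as a node; the visual layer only sees its inputs and outputs.
SMOOTH_DAMP_NODE = NodeDefinition(
    name="SmoothDamp",
    inputs=("current", "target", "velocity", "dt"),
    outputs=("position", "velocity"),
    body=smooth_damp,
)

def evaluate(node: NodeDefinition, wired: Dict[str, float]) -> dict:
    # The visual scripting side just supplies a value for each declared input.
    return node.body(*[wired[name] for name in node.inputs])

print(evaluate(SMOOTH_DAMP_NODE, {"current": 0.0, "target": 1.0, "velocity": 0.0, "dt": 0.1}))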

16:49: That said, there are definitely things we can do to improve ProtoFlux.

16:53: The two big things I'm particularly looking forward to are nested nodes.

17:00: Those will let you create essentially package-like functions.

17:03: You'll be able to define...

17:06: If I... I kinda wanna draw this one in, so...

17:10: I should probably have done this at the start, but...

17:17: I kinda forgot...

17:22: Let's see...

17:23: If I move to the... If I end up moving... This is probably gonna be too noisy visually.

17:30: I gotta pick it up.

17:32: And let's try this. I'm gonna move this over here.

17:41: So...

17:43: Make sure I'm not colliding with anything.

17:46: So the idea is you essentially define a node with your set of inputs.

17:56: And this is my thinking for the interface.

18:00: So this would be your inputs.

18:03: So for example you can have value inputs, you can have some impulse inputs.

18:07: And you have some outputs. It can be values as well as impulses.

18:14: And then inside of the node you can do whatever you want.

18:20: Maybe this goes here, maybe this goes here, this goes here.

18:24: And this goes here, and this goes here, and here.

18:27: And then this goes here.

18:29: Or maybe this goes here, and this goes here.

18:33: And once you define this, you essentially, this becomes its own node that you can then reuse.

18:41: So you get like a node that has the same interface that you defined over there.

18:49: And this is sort of like the internals of that node.

18:52: And then you can have instances of that node that you can use in lots of different places.

18:58: With this kind of mechanism, you'll be able to package a lot of common functionality into your own custom nodes and just reuse them in a lot of places without having to copy all of this multiple times.

19:14: Which is going to help with performance for ProtoFlux, because the system will not need to compile essentially the same code multiple times.

19:22: But it'll also help with the community, because you'll be able to build libraries of ProtoFlux nodes and just kind of distribute those and let people use a lot of your custom nodes.

19:34: So I think that's going to be particularly big feature for ProtoFlux on this kind of lens.

19:41: It's something that's already supported internally by the ProtoFlux VM, but it's not integrated with FrooxEngine yet.
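
A rough sketch of the reuse benefit described above: the node's internals are defined and compiled once, and every instance in the scene just references that single template instead of carrying its own copy. Illustrative only; this is not how the ProtoFlux VM is actually implemented.

class NodeTemplate:
    compile_count = 0

    def __init__(self, fn):
        self.fn = fn
        self.compiled = None

    def compile(self):
        if self.compiled is None:          # compiled once, shared by all instances
            NodeTemplate.compile_count += 1
            self.compiled = self.fn
        return self.compiled

class NodeInstance:
    def __init__(self, template):
        self.template = template           # only a reference, no duplicated graph

    def run(self, *args):
        return self.template.compile()(*args)

bounce = NodeTemplate(lambda height, damping: height * damping)
bubbles = [NodeInstance(bounce) for _ in range(1000)]
results = [b.run(2.0, 0.8) for b in bubbles]

print(results[0], NodeTemplate.compile_count)   # 1.6 1 -> one compile for 1000 instances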

19:51: There's another aspect to this as well, because once we have support for custom nodes, we can do lots of cool things where this essentially becomes like a function, like an interface.

20:04: So you can have systems like, for example, the particle system that I'm actually working on.

20:11: And say you want to write a module for the system, the particle system could have bindings that it accepts.

20:22: It essentially accepts any node that, for example, has three inputs.

20:29: Say, for example, position, lifetime, that's how long the particle has existed, and say direction.

20:44: And then you have output, and the output is a new position.

20:50: And then inside you can essentially do whatever math you want.

20:56: And if your node, if your custom node follows this specific interface, like it has these specific inputs, this specific output, it becomes a thing.

21:05: You can just drop in as a module into the particle system to drive the particle's position, for example, or its color, or other properties.

21:15: And you'll be able to package behaviors and drop them into other, non-ProtoFlux systems, and have essentially a way to visually, using the visual scripting, define completely new modules for the particle system.
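
A small sketch of the interface idea above: the particle system would accept any module whose signature matches (position, lifetime, direction) -> new position. The binding check here is just an illustration; the real module bindings don't exist yet.

import inspect
import math

def swirl_module(position, lifetime, direction):
    # Custom behavior: orbit around the Y axis while drifting along the direction.
    angle = lifetime * 2.0
    x, y, z = position
    return (x * math.cos(angle) - z * math.sin(angle),
            y + direction[1] * 0.01,
            x * math.sin(angle) + z * math.cos(angle))

def matches_position_module(fn):
    # The system only cares that the node exposes the expected inputs.
    return list(inspect.signature(fn).parameters) == ["position", "lifetime", "direction"]

if matches_position_module(swirl_module):
    print(swirl_module((1.0, 0.0, 0.0), lifetime=0.5, direction=(0.0, 1.0, 0.0)))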

21:32: But it expands beyond that. You'll be able to do procedural textures.

21:36: Like one node that you might be able to do is one with an interface where you literally have two inputs. Or maybe just one input even.

21:45: Say the UV, that's the UV coordinate and texture, and then a color.

21:53: And then inside you do whatever, and on the output you have a color.

21:59: And if it follows this kind of interface, what it essentially does is you get a texture that's like a square.

22:09: For each pixel, your node gets the UV coordinate and it turns it into a color.

22:15: So if you want to make a procedural texture where each pixel can be computed completely independent of all others, all you need to do is define this.

22:25: Make sure you have UV input, you have color output, and this whole thing can become your own custom procedural texture.

22:33: Where you just decide, based on the coordinate you're in, you're going to do whatever you want to compute pixel color and it's just going to compute it for you.

22:42: And with this, it will also fit in a way that this can be done in a multi-threaded manner.

22:49: Because each pixel is independent, as the code is generating the texture, you can call this node in parallel.
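
A minimal sketch of the procedural texture idea above: anything that maps a UV coordinate to a color can be baked into a bitmap, and because every pixel is independent, the evaluation can run in parallel. Purely illustrative.

from concurrent.futures import ThreadPoolExecutor

def checker(uv):
    u, v = uv
    return (255, 255, 255) if (int(u * 8) + int(v * 8)) % 2 else (30, 30, 30)

def bake(node, size=64):
    coords = [((x + 0.5) / size, (y + 0.5) / size) for y in range(size) for x in range(size)]
    with ThreadPoolExecutor() as pool:      # each pixel computed independently
        return list(pool.map(node, coords))

texture = bake(checker)
print(len(texture), texture[0])              # 4096 pixels for a 64x64 texture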

22:58: There's going to be more complicated ones. You'll be able to do your own custom procedural meshes, for example.

23:09: Those ones are probably going to be a little bit more complicated, because you'll have to build the geometry.

23:15: But essentially, the way that one might work is you get an impulse, and then you do whatever logic you want to build a mesh, and now you have your procedural mesh component.

23:26: And you can just use it like any other procedural component.
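
A very rough sketch of the procedural mesh idea above: on a trigger, run whatever logic you like and hand back vertices plus triangle indices. Illustrative only.

def build_quad(width, height):
    vertices = [(0.0, 0.0, 0.0), (width, 0.0, 0.0), (width, height, 0.0), (0.0, height, 0.0)]
    triangles = [(0, 1, 2), (0, 2, 3)]
    return vertices, triangles

verts, tris = build_quad(2.0, 1.0)
print(len(verts), tris)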

23:30: I think once this goes in, this is going to be a particularly powerful mechanism.

23:36: A lot of systems that don't have much to do with ProtoFlux right now, they will strongly benefit from it.

23:44: So this to me is going to be a really big feature of ProtoFlux.

23:51: The other one that I am particularly looking forward to, especially implementing it and playing with it, is the DSP mechanism.

23:59: And what that will let you do is make sort of workflows with the nodes to do stuff like processing audio, processing textures, and processing meshes.

24:12: With those, you'll be able to do stuff like build your own audio studio, or music studio.

24:20: Where you can do filters on audio, you can have signal generators, and you could pretty much use Resonite to produce music or produce sound effects.

24:31: Or you could use it to make interactive audio-visual experience.

24:37: Where there's a lot of real-time processing through audio, and you can feed it what's happening in the world, and change those effects.
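
A tiny sketch of the audio DSP idea above: a signal generator feeding a filter and a gain stage, the kind of chain you could wire together out of nodes. Illustrative only.

import math

def sine(freq, sample_rate=48000, seconds=0.01):
    n = int(sample_rate * seconds)
    return [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]

def low_pass(samples, alpha=0.1):
    # One-pole low-pass filter: smooths the incoming signal.
    out, prev = [], 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)
        out.append(prev)
    return out

def gain(samples, amount):
    return [s * amount for s in samples]

# Wire the "nodes" into a chain: generator -> filter -> gain.
signal = gain(low_pass(sine(440.0)), 0.5)
print(len(signal), round(max(signal), 3))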

24:45: And that in itself will open up a lot of new workflows and options that are not available right now.

24:55: They're a little bit there, but not enough for people to really even realize it.

25:02: So the DSP is a big one. Same with the texture one, you'll be able to do procedural textures, which on itself is also really fun to play with.

25:12: But also you can now, once we have those, you'll be able to use Resonite as a production tool.

25:18: Even if you're building a game in Unity or Unreal, you could use Resonite as part of your workflow to produce some of the materials for that game.

25:26: And it gets a lot of the benefits of having it be a social sandbox platform.

25:32: Because, say you're working on a sound effect, or you're working on music, or working on procedural texture, you can invite people in and you can collaborate in real-time.

25:43: That's given thanks to Resonite's architecture, it's just automatic.

25:48: If you have your favorite setup for studio, for working on something, you can just save it into your inventory, send it to somebody, or just load it, or you can publish it and let other people play with your studio setup.

26:02: The DSP part is also going to be a big doorway to lots of new workflows and lots of new ways to use Resonite.

26:15: I'm really excited for that part, and also part of it is I just love audio-visual stuff.

26:21: You wire a few notes, and now you have some cool visuals coming out of it, or some cool audio, and you can mess with it.

26:28: There's another part for the mesh processing.

26:35: You could, for example, have a node where you input a mesh, and on the output you get a subdivided, smoothed-out mesh.

26:48: Or maybe it voxelizes, maybe it triangulates, maybe it applies a boolean filter, or maybe there's some perturbation to the surface.

26:57: And that feature I think will combine with yet another feature that's on the roadmap, which is vertex-based mesh editing.

27:07: Because you'd essentially be able to do a thing where, say you have a simple mesh, and this is what you're editing.

27:20: And then this mesh, this live mesh, I'm actually going to delete this one in the background because they're a bit bad for the contrast.

27:34: So I'm taking a little bit longer for this question, but this is one I'm particularly excited for, so I want to go a little bit in-depth on this.

27:45: Okay, that should be better.

27:47: So you're editing this mesh, and then you have your own node setup that's doing whatever processing, and it's making a more complex shape out of it because it's applying a bunch of stuff.

27:58: And you edit one of the vertices and it just runs through the pipeline, your mesh DSP processing pipeline, and computes a new output mesh based on this as an input.

28:09: So you move this vertex, and this one maybe does this kind of thing.

28:15: You do this kind of modeling, if you use Blender, this is what you do with the modifiers, where you can have simple base geometry and a subdivision surface, and then you're moving vertices around, and it's updating the more complex mesh by processing it with modifiers.

28:33: The mesh DSP combined with the vertex editing will allow for a very simple workflow, but one that I feel is even more powerful and flexible, and also will probably add more performance because our processing pipeline is very asynchronous.

28:50: Because when I mess with Blender, one of the things that kind of bugs me is if you use modifiers, it takes a lot of processing, the whole interface essentially lags.

29:02: The way stuff is versioned in Resonite, you will not lag as a whole, only the thing that's updating will take longer. Say this takes a second to update and I move the vertex, I'll see the result in a second, but I will not lag entirely for a second.
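
A rough sketch of the non-blocking update described above: editing the base mesh kicks off the modifier pipeline on a background thread, and the caller keeps running while the heavier result catches up. Illustrative only, not Resonite's actual scheduling.

import threading
import time

def subdivide(vertices):
    time.sleep(0.2)                          # stand-in for an expensive modifier
    midpoints = [(a + b) / 2 for a, b in zip(vertices, vertices[1:])]
    return sorted(vertices + midpoints)

class LiveMesh:
    def __init__(self, vertices):
        self.base = list(vertices)
        self.output = list(vertices)

    def edit(self, index, value):
        self.base[index] = value
        # Recompute on a worker thread so editing never blocks the world update.
        threading.Thread(target=self._recompute, daemon=True).start()

    def _recompute(self):
        self.output = subdivide(list(self.base))

mesh = LiveMesh([0.0, 1.0, 2.0])
mesh.edit(1, 1.5)
print("still responsive while the pipeline runs...")
time.sleep(0.3)
print(mesh.output)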

29:17: So that itself I think will combine really well with lots of upcoming features and all sorts of existing features.

29:25: And for me that's just a big part, even just beyond ProtoFlux, it's how I like to design things.

29:36: This way where each system is very general, it does its own thing, but also it has lots of ways to interact with lots of other systems.

29:58: So this should cover it. I'm going to hop back here.

30:05: I went also deep on this particular question, but hopefully that kind of sheds some idea on some of the future things and things I want to do, not just in ProtoFlux, but with other things.

30:26: There we go. Sorry, I'm just settling back in.

30:33: I hope that answers the question in good detail.

30:41: So next question we have...

30:47: Troy Borg is asking, you said you had a side project you wanted to work on when you get done with particle system before starting audio system rework. Are you able to talk about it?

30:57: Yes, so the thing I was kind of thinking about is...

31:05: Essentially I've been playing a lot with Gaussian splatting recently, and I can actually show you some of the videos.

31:15: Let me bring some of my Splats.

31:19: The only way I can actually show you these is through a video.

31:23: This is one I did very recently, so this is probably one of the best scans.

31:32: You can see if I play this, I almost ask you if this is loaded for you, but it only needs to be loaded for me.

31:40: If you look at this, this is a scan of a fursuit head of a friend who is here in Czech Republic.

31:47: His name is Amju.

31:48: He let me scan his fursuit head.

31:51: I first reconstructed it with a traditional technique, but then I started playing with Gaussian Splat Software.

31:56: I threw the same dataset at it, and the result is incredible.

32:00: If you look at the details of the fur, the technique is capable of capturing the softness of it.

32:09: It just looks surreal.

32:15: That's the easiest way to describe it.

32:19: It gives you an incredible amount of detail, while still being able to render this at interactive frame rates.

32:28: I've been 3D scanning for years.

32:31: I love 3D scanning stuff and making models of things.

32:35: And this technique offers a way to reconstruct things.

32:41: I can actually show you how the result of this looks with traditional photogrammetry.

32:54: So if I bring this, you see this is traditional mesh.

32:58: And it's a perfectly good result. I was really happy with this.

33:03: But there's no softness in the hair. There's artifacts around the fur.

33:10: It gets kind of blob-ish. It loses the softness that the Gaussian splats are able to preserve.

33:18: This is another kind of example.

33:20: I took these photos for this in 2016. That's like 8 years ago now.

33:29: And also, if you just look at the part, it just looks real.

33:35: I'm really impressed with the technique. I've been having a lot of fun with it.

33:41: And on my off-time, I've been looking for ways...

33:53: I'm kind of looking at how it works.

33:56: And the way the Gaussian splats work, it's relatively simple in principle.

34:01: It's like an extension of point cloud, but instead of just tiny points,

34:05: each of the points can be a colorful blob that has fuzzy edges to it,

34:11: and they can have different sizes.

34:12: You can actually see some of the individual splats.

34:16: Some of them are long and stretched. Some of them can be round.

34:19: Some are small, some are big.

34:22: And they can also change color based on which direction you're looking at them from.

34:28: So I've been essentially looking for a way,

34:30: can we integrate the Gaussian splatting rendering into Resonite?

34:35: I'm fairly confident I'll be able to do it at this point.

34:39: I understand it well enough to make an implementation.

34:43: The only problem is I don't really have time to actually commit to it right now

34:46: because I've been focusing on finishing the particle system.

34:49: But the thing I wanted to do is, after I'm done with the particle system,

34:55: mix in a smaller project that's more personal and fun,

35:00: just like a mental health break, pretty much.

35:03: It's something I want to do primarily for myself,

35:09: because I want to bring those scans in and showcase them to people.

35:15: I'm still not 100% decided, I'll see how things go,

35:18: but I'm itching to do this and doing a little bit of research

35:23: on the weekends and so on to get this going.

35:28: It's something I like to do.

35:33: Also something that a lot of people would appreciate as well,

35:35: because I know there's other people in the community

35:38: who were playing with Splats and they wanted to bring them in.

35:41: I think it would also make Resonite interesting to a lot of people

35:46: who might not even think about it now,

35:47: because it's essentially going to give you a benefit

35:50: to visualize the Gaussian splats in a collaborative sandbox environment.

35:55: It might even open up some new doors.

35:58: I'm not 100% decided, but pretty much this is what I've been thinking about.

36:08: Next, Noel64 is asking,

36:09: Are there plans to add Instant Cut options for cameras?

36:13: The current flight from one place to another seeking looks a bit weird with the overloading distances.

36:17: You can already do this, so there's an option.

36:21: I just have it at default, which does have the fly,

36:29: but there's literally a checkbox in my UI,

36:32: Interpolate Between Anchors.

36:33: If I uncheck that and I click on another,

36:36: like, you know, this is instant,

36:37: I will click over there if I can re-

36:39: No, there's a collider in the way.

36:43: I'm just going to do this.

36:44: I click on it, you know, I'm instantly here.

36:47: So that feature already exists.

36:49: If it kind of helps, I can just, you know, keep this one on

36:52: so it doesn't do the weird fly-through.

36:57: But yes, I hope that answers the question.

37:02: Next, Wicker Dice.

37:04: What would you like Resonite to be in five years?

37:06: Is there a specific goal or vision?

37:08: So for me, like, the general idea of Resonite is

37:16: it's kind of hard to put it in words sometimes

37:18: because in a way that would be a good way to communicate

37:22: but it's almost like, it's like a layer.

37:29: It's like a layer where you have certain guarantees.

37:33: You're guaranteed that everything is real-time synced.

37:36: Everything is real-time collaborative.

37:38: Everything is real-time editable.

37:43: You have integrations with different hardware.

37:45: You have persistence.

37:47: You can save anything, whether it's locally or through cloud,

37:51: but everything can be persisted.

37:55: And what I really want Resonite to be is

37:58: be this layer for lots of different workflows

38:03: and for lots of different applications.

38:06: The earlier one is social VR, where you hang out with friends,

38:10: you're watching videos together,

38:13: you're playing games together,

38:16: you're just chatting or doing whatever you want to do.

38:21: But if you think about it, all of it is possible

38:23: thanks to this baseline layer.

38:26: But there's also other things you can do

38:27: which also benefit from that social effect.

38:30: And it kind of ties into what I've been talking about earlier,

38:34: which has to do with using Resonite as a work tool,

38:37: as part of your pipeline.

38:39: Because if you want to be working on music,

38:43: if you want to be making art,

38:45: if you want to be doing some designing and planning,

38:49: you still benefit from all these aspects of the software,

38:53: being able to collaborate in real time.

38:56: If I'm working on something and showing something,

38:59: you immediately see the results of it.

39:03: You can modify it and can build your own applications on it.

39:07: People, given Resonite's nature, can build their own tools.

39:12: And then share those tools with other people as well, if you want to.

39:17: So for me, what I really want Resonite to be

39:20: is a foundation for lots of different applications

39:23: that goes beyond just social VR,

39:27: but which enriches

39:29: pretty much whatever task you want to imagine

39:34: with that social VR,

39:37: with the real-time collaboration,

39:39: and persistence, and networking, and that kind of aspect.

39:42: Think of it as something like Unity,

39:45: or your own Unreal, because those engines,

39:47: or Godot, I shouldn't forget that one,

39:51: these engines,

39:54: they're maybe primarily designed for building games,

39:56: but people do lots of different stuff with them.

39:59: They build scientific visualization applications,

40:03: medical training applications,

40:05: some people build actual just utilities with them.

40:11: They're very general tools,

40:13: which solve some problems for you,

40:16: so you don't have to worry about

40:17: low-level graphics programming in a lot of cases,

40:19: you don't have to worry about having

40:21: a basic kind of functional engine.

40:24: You kind of get those for free, in quotes.

40:29: In a sense, you don't need to spend time on them,

40:31: that's already provided for you.

40:33: And you can focus more on what your actual application is,

40:36: whether it's a game, whether it's a tool,

40:38: whether it's a research application,

40:41: whatever you want to build.

40:43: I want the Resonite to do the same, but go a level further,

40:46: where instead of just providing the engine,

40:49: you get all the things I mentioned earlier.

40:51: You get real-time collaboration.

40:53: Whatever you build, it supports real-time collaboration.

40:57: It supports persistence, you can save it.

41:00: You already have integrations with lots of different hardware,

41:03: interactions like grabbing things, that's just given.

41:07: You don't have to worry about that,

41:09: you can build your applications around that.

41:13: I want the Resonite to be almost the next level,

41:21: beyond game engines.

41:25: Another kind of analogy I use for this is,

41:28: if you look at early computing,

41:32: when computers were big and room-scaled,

41:36: the way they had to be programmed is with punch cards, for example.

41:39: I don't know if that was the very first method,

41:41: but it's one of the earliest.

41:43: And it's very difficult because you have to write your program,

41:47: and you have to translate it in the individual numbers on the punch card,

41:51: and then later on there came assembly programming languages.

41:55: And those made it easier,

41:57: they let you do more in less time,

42:01: but it was still like, you have to think about managing your memory,

42:04: managing your stack.

42:06: You need to decompose complex tasks into these primitive instructions,

42:11: and it still takes a lot of mental effort.

42:14: And then later on came higher-level programming languages.

42:18: I'm kind of skipping a lot, but say C, C++, C-sharp,

42:24: and languages like Python.

42:26: And they added further abstractions where, for example,

42:29: with modern C and C++,

42:33: you don't have to worry about memory management as much,

42:35: at least not managing your stack.

42:38: And now some of the things you have to worry about,

42:42: they're automatically managed.

42:44: You don't even have to think about them.

42:46: You can just say, my function accepts these values,

42:49: outputs this value, and it generates the appropriate stack management code for you.

42:57: And then came tools built with those languages,

43:00: like I mentioned, Unity or Unreal,

43:02: where you don't have to worry about, or Godot,

43:05: where you don't have to worry about having the game engine,

43:09: being able to render stuff on screen.

43:10: That's already provided with you.

43:13: And with Resonite,

43:15: the goal is to essentially move even further along this kind of progression

43:19: to make it where you don't have to worry about the networking aspect,

43:23: the persistence aspect, integrations with hardware,

43:26: you're just given that,

43:28: and you can focus more of your time

43:30: on what you actually want to build in that kind of environment.

43:34: So that's pretty much,

43:36: that's the big vision I have on my end

43:39: for what I want Resonite to be.

43:43: I think Easton is asking,

43:46: what are your thoughts on putting arrows on generic type wires?

43:54: I'm not actually sure if I fully understand that one.

43:58: I don't know what you mean, generic type wires.

44:01: Do you mean wires that are of the type type?

44:06: I probably need a clarification for this one.

44:11: Sorry, I can't...

44:13: I don't know how to interpret this particular question, so I'll...

44:17: Oh, he's asking about arrows on wires.

44:24: I think the impulse ones actually have arrows.

44:30: I'm not really sure, I probably need to see an image or something.

44:35: Next, zitjustzit is asking,

44:37: So, like, boxes of code that take inputs and give outputs,

44:39: allowing for a coding interface with ProtoFlux without having to build some parts of the function using the nodes?

44:44: Yes.

44:47: Your protoflux node becomes a function that other systems can call

44:53: without even needing to know its protoflux.

44:57: They're just like, I'm going to give you these values and I expect this value as the output,

45:02: and if your node matches that pattern, then you can give it to those other systems.

45:10: Next question, Treyborg, that does sound amazing,

45:14: is that something for after Sauce, custom ProtoFlux nodes?

45:18: So, it's not related to Sauce, that's fully FrooxEngine side,

45:22: so technically it doesn't matter whether it happens before or after Sauce,

45:28: it's not dependent on it in any way.

45:32: There is a part that is, which is having custom shader support,

45:35: which we do want to do with ProtoFlux, that one does require the switch to Sauce,

45:41: because with Unity, the options to do custom shaders are very limited,

45:47: and very kind of hacky.

45:50: So, that one will probably wait, but for the parts I was talking about earlier,

45:56: those will happen regardless of when Sauce comes in.

45:59: It might happen after Sauce comes in, it might happen before it comes in,

46:03: but this is just purely how the timing ends up working out,

46:07: and how the prioritization ends up working out.

46:10: Next question, ShadowX, I'm just checking time.

46:14: ShadowX, with nested nodes, will custom nodes be able to auto-update

46:17: when the source template for said node is changed?

46:20: Yes, there are multiple ways to interpret this as well,

46:24: but if you have a template, and you have it used in lots of places,

46:28: if you change the internals of the node, every single instance is going to be reflected.

46:33: So you can actually have it used in lots of objects in the scene,

46:40: and if you change something about its internals,

46:42: everything is going to be reflected in the scene.

46:45: The other interpretation is if you make a library of nodes,

46:51: and say you reference that in your world,

46:54: and the author of the library publishes an updated version,

46:57: is that going to auto-update other worlds, which do use that library?

47:04: That would be handled by the Molecule system,

47:08: which is our planned system for versioning,

47:12: and we want to use it not just for Resonite itself,

47:15: but also for ProtoFlux, so you can publish your library functions and so on.

47:20: And with that, what we do is let you define rules on when to auto-update and what not.

47:27: We probably follow something like semantic versioning,

47:30: so if it's a minor update, it auto-updates unless you disable that as well.

47:36: If it's a major update, it's not going to auto-update unless you specifically ask it to.
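
A small sketch of the rule described above, assuming semantic versioning: minor and patch bumps update automatically, major bumps only when the user opts in. The Molecule system itself isn't built yet, so this is purely illustrative.

def should_auto_update(installed: str, published: str, allow_major: bool = False) -> bool:
    inst = tuple(int(p) for p in installed.split("."))
    pub = tuple(int(p) for p in published.split("."))
    if pub <= inst:
        return False              # nothing newer to update to
    if pub[0] > inst[0]:
        return allow_major        # major bump: only if explicitly allowed
    return True                   # minor or patch bump: update by default

print(should_auto_update("1.2.0", "1.3.1"))         # True
print(should_auto_update("1.2.0", "2.0.0"))         # False
print(should_auto_update("1.2.0", "2.0.0", True))   # True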

47:42: So that's going to be the other part of it.

47:44: That one's definitely going to give you more of a choice.

47:51: Next question, TroyBorg.

47:53: So could I have something like BubbleNode that has all the code floating around randomly,

47:58: the random lifetime on it?

48:01: I'm not really sure what you mean, BubbleNode,

48:03: but pretty much you can package all the code for whatever you want the bubble to do in that node,

48:10: and it doesn't need to be, for example, my bubbles.

48:16: Oh, that didn't work.

48:20: There we go, like this one.

48:21: You see, I have this bubble, and this bubble, it has code on it that handles it flying around,

48:32: and right now, when I make the bubble, it actually duplicates all the code for the bubble,

48:36: which means the ProtoFlux VM needs to compile all this code.

48:40: It's relatively simple, so it's not as much, but still, it adds up,

48:44: especially if you had hundreds of these, or thousands.

48:49: With the nested nodes, all this bubble will need to do is reference that template,

48:54: and only need one of it, which means it's just going to reuse the same compiled instance of that particular node

49:01: instead of duplicating literally the entire thing on each of the objects you make independently.

49:11: Next, a question from TroyBorg.

49:16: For processing textures, you could do stuff like level curves adjustments,

49:21: like in Blender Classroom, the texture for Albedo, they adjust it with levels,

49:25: and grayscale then plug into a heightmap instead of separate texture.

49:32: I cannot fully understand how to map this question to Resonite,

49:36: because I'm not familiar with Blender Classroom,

49:40: but you could define your own procedural texture and then use it in other stuff.

49:46: The procedural texture, it will end up as an actual bitmap.

49:49: It's going to run your code to generate the texture data,

49:53: upload it to the GPU, and at that point it's just the normal texture.

50:00: But you're able to do stuff like that, or at least look similar.

50:03: Next question, Dusty Sprinkles. I'm also just checking how many of these.

50:07: Fair bit of questions. Dusty Sprinkles is asking,

50:13: When we get custom nodes, do you think we'll be able to create our own default node UI?

50:18: I could see using AudioDSP for custom nodes to make DAWs.

50:23: So the custom UI for the ProtoFlux nodes that's completely independent from custom nodes.

50:31: It's something we could also offer, but it's pretty much a completely separate feature.

50:37: Because the generation of the node is technically also the main ProtoFlux itself.

50:43: It's the UI to interface with the ProtoFlux.

50:47: We could add mechanisms to be able to do custom nodes.

50:51: There are some parts of that that I'm a little bit careful with,

50:55: because usually you can have hundreds or thousands of nodes,

50:59: and having customizable systems can end up like a performance concern.

51:05: Because the customization, depending on how it's done,

51:09: it can add a certain amount of overhead.

51:12: But it's not something we're completely closed to,

51:15: it's just like we're probably going to approach it more carefully.

51:19: It's not going to come as part of custom nodes, though.

51:22: Those are independent.

51:27: BlueCyro, oh, Cyro unfortunately is late,

51:30: but I don't know if I can bring them out easily,

51:34: because I already have this set up without them, I'm sorry.

51:41: JViden4 is asking, internally, how prepared is the engine

51:45: to take full advantage of modern .NET, past the JIT?

51:48: There's been lots of things since framework, like spans,

51:50: to avoid allocs, and unsafe methods, think bitcasting,

51:53: that can make things way, way, way faster.

51:56: Are there areas where we use the new features in the headless client

51:59: through preprocessor directives or something?

52:02: We do use a number of features that are backported to older versions,

52:08: like you mentioned spans, using stack allocations and so on,

52:13: and we expect those to get a performance uplift

52:17: with the modern runtime because those are specifically optimized for it.

52:21: So there's parts of the engine, especially anything newer,

52:25: like we've been trying to use the more modern mechanisms where possible.

52:31: There are bits where we cannot really use the mechanisms,

52:34: or if we did, it would actually be detrimental right now.

52:38: There's certain things like, for example, vectors library.

52:45: That one, with modern .NET, it runs way faster,

52:48: but if we used it right now, we would actually run way slower

52:52: because with Unity's Mono, it just doesn't run well.

52:59: There's certain things which, if we did right now,

53:04: would essentially end up hurting performance until we make the switch,

53:08: but we tend to use a different approach that's not as optimal for the more modern .NET,

53:12: but is more optimal now.

53:15: So there may be some places, like in code, you can see that,

53:18: but where possible, we try to use the modern mechanisms.

53:21: There's also some things which we cannot use just because they don't exist

53:25: with the version of .NET, and there's no way to backport them.

53:27: For example, using SIMD, the intrinsics, to accelerate a lot of the math.

53:34: That's just not supported under the older versions, and there's no way to backport it,

53:38: so we cannot really use those mechanisms.

53:43: Once we do make the switch, we expect a pretty substantial performance improvement,

53:49: but part of why we want to do the switch, especially as one of the first tasks towards performance,

53:56: is because it'll let us use all the mechanisms going forward.

54:02: If we build new systems, we can take full advantage of all the mechanisms that modern .NET offers,

54:09: and we can, over time, also upgrade the old ones to get even more performance out of it.

54:16: Overall, I expect big performance improvements, even for the parts that are not the most prepared for it,

54:21: just because the code generation quality is way higher,

54:32: but we can essentially take the time to get even more performance by switching some of the approaches

54:42: to the more performant ones.

54:45: Making the switch, that in itself is going to be a big performance boost, but that's not the end of it.

54:50: It also opens doors for doing even more, like following that.

55:05: Splats, you kind of have to implement the support for them yourself.

55:09: The output isn't a mesh.

55:11: The idea of Gaussian splatting is you're essentially using a different type of primitive to represent your...

55:20: I don't want to say mesh, because it's not a mesh.

55:24: It essentially represents your model, your scene or whatever you're going to show.

55:30: Instead of the vertices and triangles that we have in traditional geometry,

55:34: it's completely composed from Gaussians, which are a different type of primitive.

55:41: You can actually use meshes to render those.

55:45: From what I've looked, there's multiple ways to do the rendering.

55:51: One of the approaches for it is you essentially include the splat data,

56:01: and then you just render this. You use the typical GPU rasterization pipeline to render them as quads,

56:08: and then the shader does the sampling of spherical harmonics so it can change color based on the angle you look at it from and other stuff.

56:16: There's other approaches that implement the actual rasterization in compute shader,

56:23: and this can lead to more efficient ones, and at that point you're not using traditional geometry,

56:29: but the approach is kind of varied, there's lots of ways to implement them.
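
A small sketch of the blending that the quad-based approach above relies on: splats are sorted back to front and alpha-composited, with a Gaussian falloff giving the soft edges. A real renderer does this per pixel on the GPU; this composites a single pixel and is illustrative only.

import math

def gaussian_weight(distance, scale):
    return math.exp(-0.5 * (distance / scale) ** 2)

splats = [
    {"depth": 5.0, "color": (0.9, 0.2, 0.2), "opacity": 0.8, "dist": 0.1, "scale": 0.3},
    {"depth": 2.0, "color": (0.2, 0.2, 0.9), "opacity": 0.6, "dist": 0.4, "scale": 0.5},
]

# Painter's algorithm: blend the farthest splat first, nearest last.
color = [0.0, 0.0, 0.0]
for s in sorted(splats, key=lambda s: -s["depth"]):
    alpha = s["opacity"] * gaussian_weight(s["dist"], s["scale"])
    color = [c * (1 - alpha) + sc * alpha for c, sc in zip(color, s["color"])]

print([round(c, 3) for c in color])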

56:34: Next question, Troy Borg. Is that something you need special software to create them,

56:38: or is it just something in Resonite so it knows how to render that new format for a 3D scan?

56:43: We essentially need code to render them. The gist of it, the way you approach it,

56:51: is you get your data set, your Gaussian splat, it's essentially points with lots of extra data.

56:59: You have the size of it, and then the color is encoded using something called spherical harmonics.

57:06: That's essentially a mathematical way to efficiently encode information on the surface of a sphere,

57:13: which means you can sample it based on the direction.

57:17: If you consider a sphere... I should've grabbed my brush.

57:24: I'm gonna grab a new one, because I'm too lazy to go over there.

57:33: Brushes... let's see... I'm not gonna go over there, because it's just a simple doodle.

57:38: Say you have this sphere in 2D, it's a circle.

57:45: And if it's a unit sphere, then each point on the sphere is essentially a direction.

57:50: So if you have information encoded on the surface of your sphere, then each point...

57:56: If I'm the observer here, so this is my eye, and I'm looking at this,

58:02: then the direction is essentially the point from the center of the sphere towards the observer.

58:08: I use this direction to sample the function, and I get a unique color for this particular direction.

58:13: If I look at it from this direction, then I get this direction, and I sample the color here,

58:22: and this color can be different than this color.

58:24: And this is why the Gaussian splats are really good at encoding stuff like reflections, for example,

58:31: because with reflection, the color on the point, it literally changes based on the angle you look at it from.

58:43: And it's the spherical harmonics that actually take the bulk of the data for a Gaussian splat,

58:49: because from what I've seen, they use third-order spherical harmonics,

58:55: which means for each point you actually have 16 colors, which is quite a lot.

59:00: And a lot of the work is how do you compress that in a way that the GPU can decode very fast on the fly,

59:09: without eating all your VRAM.
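
A minimal sketch of the view-dependent sampling described above: the color stored on a splat is a set of spherical harmonic coefficients, and they are evaluated with the direction from the splat toward the camera. Only bands 0 and 1 are shown here (4 coefficients per channel); third-order splats carry 16. Illustrative, not actual shader code.

import math

def sh_basis(direction):
    x, y, z = direction
    norm = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / norm, y / norm, z / norm
    return [0.282095,          # band 0: constant term
            0.488603 * y,      # band 1: varies linearly with direction
            0.488603 * z,
            0.488603 * x]

def sample_color(sh_coeffs, direction):
    basis = sh_basis(direction)
    # One list of coefficients per channel (R, G, B).
    return tuple(sum(c * b for c, b in zip(channel, basis)) for channel in sh_coeffs)

coeffs = ([1.0, 0.0, 0.0, 0.6],   # red varies strongly along the x direction
          [0.5, 0.0, 0.0, 0.0],
          [0.5, 0.0, 0.0, 0.0])

print(sample_color(coeffs, (1.0, 0.0, 0.0)))    # seen from +x
print(sample_color(coeffs, (-1.0, 0.0, 0.0)))   # seen from -x: a different color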

59:13: But essentially, to answer the question more directly,

59:17: you write your code to encode it properly, and then render it as efficiently as you can.

59:24: And you can utilize some of the existing rasterization pipelines as well, to save you some time.

59:31: Zita Zit is asking, I don't have a good understanding of splats, but aren't they essentially particles?

59:36: So I kind of went over this a few times, they're not particles in the sense of a particle system,

59:41: there's some overlap, because each splat is a point, but it has a lot of additional data to it,

59:47: and it's also not a tiny small particle, but it can be variously sized color blob.

59:56: Next question, does this use point clouds? So it kind of makes point clouds with any 3D scans, I don't really get them, but they're neat.

01:00:02: So the dataset I use for mine is essentially just photos, it's the same approach I use for traditional.

01:00:21: There's different ways to make them, but the most common one that I've seen is you usually just take lots of photos,

01:00:29: you use traditional approach, the photos get aligned in a space, and then you sort of estimate the depth.

01:00:39: Like with traditional 3D reconstruction, except for the splats, it doesn't really estimate the depth.

01:00:44: The way I've seen it done in the software I use is it starts with a sparse point cloud that's made from the tie points from the photos,

01:00:52: it's essentially points in space that are shared between the photos, and it generates splats from those.

01:01:00: And the way it does it is, I believe it uses gradient descent, which is a form of machine learning,

01:01:07: where each of the splats is actually taught how it should look so it matches your input images.

01:01:15: So that's usually the longest part of your reconstruction process, because it has to go through a lot of training,

01:01:23: like I said, Postshot, it usually runs several dozen thousand training steps,

01:01:38: and usually in the beginning you can see the splats are very fuzzy and they're just moving around,

01:01:43: and they're settling into space and getting more detail, and it also adds more splats in between them where it needs to add more detail.

01:01:51: So there's like a whole kind of training process to it.
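
A toy sketch of the training idea described above: splat parameters are nudged by gradient descent until the "rendered" result matches the reference. Real trainers optimize position, scale, rotation, opacity and spherical harmonic colors of millions of splats against photographs; this just fits two blended values to one target pixel.

def render(a, b):
    return 0.6 * a + 0.4 * b       # stand-in for rasterizing splats into a pixel

target = 0.8                        # what the photo says this pixel should look like
a, b = 0.0, 0.0
learning_rate = 0.5

for step in range(200):
    error = render(a, b) - target
    # Gradient of the squared error with respect to each "splat" parameter.
    a -= learning_rate * 2 * error * 0.6
    b -= learning_rate * 2 * error * 0.4

print(round(render(a, b), 4), round(a, 3), round(b, 3))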

01:01:56: I actually have a video I can show you, because there's also a relevant question I can see.

01:02:04: So I'm gonna...

01:02:08: Because ShadowX is asking, does all common splatting software encode spherical harmonics?

01:02:14: I never noticed color changes in my scans in Scaniverse and Postshot.

01:02:17: So I know for sure Postshot does it, I don't know about Scaniverse because I don't use that one.

01:02:22: It's possible they simplify it, because I've seen some implementations of Gaussian splats,

01:02:27: they just throw away the spherical harmonics and encode a single color, which saves tons of space,

01:02:31: but you also lose one of the big benefits of them.

01:02:37: But I can tell you, post shot definitely does it, and I have a video that showcases that pretty well.

01:02:47: So this is a kind of figure that I scanned before, and I reprocessed this with Gaussian splatting,

01:02:55: and watch the reflections on the surface of the statue.

01:03:00: You can see how they change based on where I'm looking from, it's kind of subtle.

01:03:05: If I look at the top, there's actually another interesting thing, and I have another video for this,

01:03:10: but Gaussian splatting is very useful if you have good coverage from all possible angles,

01:03:18: because the way the scanning process works, like I mentioned earlier,

01:03:22: they are trained to reproduce your input images as close as possible,

01:03:27: which means for all the areas where you have photo coverage from, they look generally great.

01:03:34: But if you move too far away from them, like in this case for example from the top,

01:03:38: I was not able to take any pictures from the top of it, it actually kind of like,

01:03:45: they start going a bit funky. Do you see how it's all kind of fuzzy and the color is blotchy?

01:03:56: And that's kind of like, for one it shows, it does encode the color based on the direction,

01:04:05: but it also shows one of the downsides of it, because I have another video here.

01:04:12: So this is a scan of, this is like, I don't even know how long ago, this was like over six years ago,

01:04:24: when my family from Ukraine, they were visiting over because my grandma, she was Ukrainian,

01:04:31: and they made borscht, which is like another kind of traditional food,

01:04:36: and I wanted to scan it, but I literally didn't have time because they put it on the dish,

01:04:40: I was like, I'm gonna scan it, and I was only able to take three photos before they started moving things around.

01:04:46: But it actually made for an interesting scan, because I was like, how much can I get out of three photos?

01:04:52: And in the first part, this is a traditional scan with a mesh surface that's done with, you know,

01:04:58: with, I guess, Metashape. Oh, I already switched to the other one.

01:05:02: So you see, all the reflections, they're kind of like, you know, baked in.

01:05:05: It doesn't actually look like metal anymore.

01:05:09: There's, you know, lots of parts missing because literally they were not scanned,

01:05:13: but the software was able to estimate the surface.

01:05:18: It knows this is a straight surface. If I look at it from an angle,

01:05:21: apart from the missing parts, it's still coherent.

01:05:24: It still holds shape.

01:05:27: With Gaussian splatting, it doesn't necessarily reconstruct the actual shape.

01:05:32: It's just trying to look correct from angles, and you'll be able to see that in a moment.

01:05:37: So this is the Gaussian Splat, and you see it looks correct,

01:05:40: and the moment I move it, it just disintegrates.

01:05:43: Like, you see, it's just a jumble of, you know, colorful points,

01:05:46: and it's because all the views that I had, they're like relatively close to each other,

01:05:51: and for those views, it looks correct because it was trying to look correct,

01:05:56: and because there's no cameras, you know, from the other angles,

01:05:59: the Gaussians are free to do, you know, just whatever.

01:06:02: Like, they don't have anything to constrain them, you know, to look a particular way,

01:06:06: so it just ends up a jumble.

01:06:08: And that's a very kind of, to me, that's a very kind of interesting way

01:06:13: to visualize the differences between the scanning techniques.

01:06:18: But yeah, that's kind of the answer to that: yes, they do encode the spherical harmonics,

01:06:23: and you can make it like, you know, pretty much with any scans,

01:06:25: but the quality of the scan is going to depend, you know, on your data set.

01:06:31: And I'll be kind of throwing, because I have like, terabytes of like, you know, 3D scans,

01:06:35: I'll be just throwing everything at the software and seeing what it ends up producing.

01:06:41: I know there's also other ways, there's like, you know, some software,

01:06:44: let me just double check the time, there's also, you know, some software that just generates it,

01:06:52: like, you know, with AI and stuff, but like, I don't know super much about that.

01:06:56: So there's like other ways to do them, but I'm mostly familiar with the one, you know, with,

01:07:02: I'm mostly familiar with, you know, like using photos as a dataset.

01:07:09: Next question, jviden4: is ProtoFlux a VM, or does it compile things?

01:07:15: So, technically, VM and compiling, those are like two separate things.

01:07:19: Also, Epic is asking what is a ProtoFlux VM, so I'm going to just combine those two questions into one.

01:07:27: So yes, it is a VM, which essentially means it's sort of like, you know, it has a defined sort of like runtime.

01:07:33: Like, it's technically a stack-based VM.

01:07:38: It's a, how do I put it, it's essentially sort of like an environment where the code of the nodes, you know,

01:07:45: it knows how to work with a particular environment that sort of isolates it, you know, from everything else.

01:07:55: It sort of compiles things, it's sort of like a halfway step.

01:07:59: It doesn't, it doesn't directly produce machine code from actual code.

01:08:06: The actual code, you know, of the individual nodes, that ends up being machine code for the node itself,

01:08:11: but the way it kind of operates is with the VM.

01:08:15: What ProtoFlux does, it builds something called execution lists and evaluation lists.

01:08:20: It's kind of like, you know, a sequence of nodes or sequence of impulses.

01:08:23: It's going to look at it and be like, okay, this executes, then this executes, then this executes,

01:08:27: and builds a list, and then during execution it already has the pre-built list,

01:08:32: as well as, like, it resolves things like, you know, stack allocation.

01:08:36: It's like, okay, this node needs to use this variable and this node needs to use this variable.

01:08:39: I'm going to allocate this space on the stack and, you know,

01:08:42: and I'm going to give these nodes the corresponding offsets so they can, you know,

01:08:48: get to and from the stack.

01:08:53: So it's kind of like, you know, it does a lot of kind of like the building process.

01:08:55: It doesn't end up as full machine code.

01:08:57: So like, I would say it's sort of like a halfway step towards compilation.
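
A loose sketch of that "build once, execute many times" idea: walk the node graph ahead of time, produce a flat evaluation list, and hand each node a fixed stack offset for its output. None of this is the actual ProtoFlux API; the node set and names are made up for illustration:

<syntaxhighlight lang="python">
class Node:
    def __init__(self, name, op, inputs=()):
        self.name, self.op, self.inputs = name, op, inputs
        self.stack_offset = None              # assigned during the "compile" step

def build_evaluation_list(output_node):
    """Depth-first walk: dependencies first, then the node itself."""
    ordered, seen = [], set()
    def visit(node):
        if id(node) in seen:
            return
        seen.add(id(node))
        for dep in node.inputs:
            visit(dep)
        node.stack_offset = len(ordered)      # one stack slot per node output
        ordered.append(node)
    visit(output_node)
    return ordered

def execute(evaluation_list):
    stack = [None] * len(evaluation_list)     # pre-sized, nothing resolved at runtime
    for node in evaluation_list:
        args = [stack[dep.stack_offset] for dep in node.inputs]
        stack[node.stack_offset] = node.op(*args)
    return stack[evaluation_list[-1].stack_offset]

# (a + b) * 2, planned once, then runnable as many times as you like
a = Node("a", lambda: 3)
b = Node("b", lambda: 4)
add = Node("add", lambda x, y: x + y, (a, b))
mul = Node("mul2", lambda x: x * 2, (add,))
plan = build_evaluation_list(mul)
print(execute(plan))   # 14
</syntaxhighlight>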

01:09:02: Eventually we might consider like, you know, doing sort of a JIT compilation where it actually

01:09:06: makes, you know, full machine code for the whole thing,

01:09:10: which could help like improve the performance of it as well.

01:09:13: But right now it's, it is a VM

01:09:18: that does a sort of halfway compilation step to kind of speed things up.

01:09:22: It also, like, you know, validates certain things.

01:09:25: Like, for example, if you have, you know, infinite kind of continuation loops,

01:09:29: like certain things are essentially like illegal.

01:09:33: Like you cannot have those be a valid program,

01:09:37: which kind of helps avoid, you know, some,

01:09:40: having some kind of issues where we have to figure out certain problems at runtime.

01:09:47: But in short, like the ProtoFlux VM, it's like a way for,

01:09:50: you know, the ProtoFlux to essentially do its job. It's like an environment, execution environment,

01:09:57: you know, that defines how it kind of works

01:10:00: and then all the nodes can operate within that environment.

01:10:05: Next question, Nitra is asking, is the current plan to move the graphical client to .NET 9

01:10:10: architecture before moving to Sauce? Yes, so we are currently,

01:10:16: I might actually just do, I've done it on the first one, but since I have a little bit better setup,

01:10:21: I might do it again just to get like, you know, a better view. Let me actually get up for this one.

01:10:26: I'm going to move over here. There we go.

01:10:31: So I'm going to move over here. I already have my brush that I forgot earlier.

01:10:34: I'm going to clean up all this stuff.

01:10:41: Let's move this to give you like a gist

01:10:44: of like the performance update. There we go. Clean all this up.

01:10:50: I'm not going to hit all. Grab my brush.

01:10:55: There we go. So right now, a little bit more.

01:10:58: Right now.

01:11:03: So the way like the situation is right now.

01:11:12: So imagine

01:11:13: this is Unity. I'm actually going to write it here.

01:11:20: And this is within the Unity,

01:11:23: we have FrooxEngine.

01:11:28: So this is FrooxEngine.

01:11:34: So Unity, you know,

01:11:39: it has its own stuff.

01:11:43: And with FrooxEngine, most things in FrooxEngine,

01:11:47: they're actually fully contained within FrooxEngine. So there's like lots of systems

01:11:52: just going to throw them as little boxes. And they're all kind of like fully contained. Unity has no idea

01:11:58: that they even exist. But then there's,

01:12:02: right now there's two systems which are sort of shared between the two.

01:12:09: There's a particle system.

01:12:15: And then there's the audio system.

01:12:23: So those two systems, they're essentially a hybrid.

01:12:28: Where FrooxEngine does some work and Unity does some work. And they're very kind of

01:12:32: intertwined. There's another part when FrooxEngine communicates with

01:12:37: Unity, there's other bits. There's also lots of

01:12:42: little connections between things

01:12:46: that tie the two together.

01:12:51: And the problem is, Unity uses something called Mono, which is a runtime,

01:12:57: it's also actually like a VM, like the ProtoFlux VM but different. But essentially it's responsible

01:13:02: for taking our code and running it. Translating it into instructions for

01:13:07: CPU, providing all the base library

01:13:13: implementations and so on. And the problem is,

01:13:17: the version that Unity uses, it's very old and it's very slow.

01:13:22: And because all of the FrooxEngine is running inside of it,

01:13:27: that makes all of this slow as well.

01:13:34: So what the plan is, in order to get a big performance update,

01:13:39: is first we need to simplify, we need to

01:13:43: disentangle the few bits of Froox Engine from Unity as much as possible.

01:13:49: The part I've been working on is the particle system.

01:13:56: I'll probably start testing it next week. It's called PhotonDust,

01:14:01: it's our new in-house particle system, and the reason we're doing it

01:14:06: is so we can actually take this whole bit...

01:14:11: I might just redraw it. I wanted to make a nice visual part

01:14:16: but it's not cooperating.

01:14:20: I'm just going to do this,

01:14:23: and then I'll do this, and just particle system, audio system.

01:14:27: So what we do, we essentially replace this with this. We make it fully contained

01:14:33: inside of Froox Engine. Once that is done, we're going to do the same for the audio engine,

01:14:38: so it's going to be also fully contained here, which means we don't have ties here.

01:14:42: And then this part, instead of lots of little wires,

01:14:48: we're going to rework this. So all the communication

01:14:52: with Unity happens via a very nicely defined

01:14:57: package, where it sends the data, and then the system

01:15:02: will do whatever here. But the tie to Unity is now

01:15:12: just rendering, and some stuff that needs to come back is sent over a very well-defined

01:15:17: interface that can be communicated over some kind of

01:15:22: inter-process communication mechanism. Probably a combination of

01:15:27: a shared memory and some pipe mechanism.
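
For a feel of what "shared memory plus a pipe mechanism" can look like, here is a generic Python sketch: the bulk data lives in shared memory and a small, well-defined message over a pipe tells the other process what to do with it. This is ordinary Python IPC, not Resonite's actual interface:

<syntaxhighlight lang="python">
import numpy as np
from multiprocessing import Process, Pipe, shared_memory

def renderer(conn):
    while True:
        msg = conn.recv()                      # small control message over the pipe
        if msg["kind"] == "quit":
            break
        shm = shared_memory.SharedMemory(name=msg["shm"])
        frame = np.ndarray(msg["shape"], dtype=np.float32, buffer=shm.buf)
        # "render": here we just report what we'd draw
        conn.send({"kind": "done", "frame": msg["frame"], "mean": float(frame.mean())})
        shm.close()

if __name__ == "__main__":
    positions = np.random.rand(1024, 3).astype(np.float32)
    shm = shared_memory.SharedMemory(create=True, size=positions.nbytes)
    np.ndarray(positions.shape, dtype=np.float32, buffer=shm.buf)[:] = positions

    engine_end, renderer_end = Pipe()
    proc = Process(target=renderer, args=(renderer_end,))
    proc.start()
    engine_end.send({"kind": "render", "frame": 1, "shm": shm.name,
                     "shape": positions.shape})
    print(engine_end.recv())                   # renderer's acknowledgement
    engine_end.send({"kind": "quit"})
    proc.join()
    shm.close(); shm.unlink()
</syntaxhighlight>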

01:15:33: Once this is done, what we'll be able to do, we could actually take Froox Engine

01:15:37: and take this whole thing out, if I can even grab the whole thing, it's being unwieldy.

01:15:44: Just pretend this is smoother than it is.

01:15:48: We'll take it out into its own process.

01:15:52: And because we now control that process, instead of being bound to Unity,

01:15:57: we can use .NET 9.

01:16:03: And this part, this is majority

01:16:06: of where time is spent running, except when it comes to rendering,

01:16:11: which is the Unity part. Which means, because we'll be able to run with .NET 9,

01:16:17: we'll get a huge performance boost.

01:16:21: And the way we know we're going to get a significant performance boost is because we've already done this

01:16:25: for our headless client. That was the first part of this performance work,

01:16:31: is move the headless client to use .NET 8, which now is .NET 9 because they released a new version.

01:16:38: The reason we wanted to do headless first

01:16:40: is because headless already exists outside of Unity, it's not tied to it.

01:16:45: So it was much easier to do this for headless than doing this for a graphical client.

01:16:50: And headless, it pretty much shares most of this.

01:16:54: Most of the code that's doing the heavy processing is in the headless, same as on the graphical client.

01:17:01: When we made the switch and we had the community start hosting events

01:17:05: with the .NET 8 headless, we noticed a huge performance boost.

01:17:09: There's been sessions, like for example the Grand Oasis karaoke.

01:17:15: I remember their headless used to struggle.

01:17:19: When it was getting around 25 people, the FPS of the headless would be dropping down

01:17:25: and the session would be degrading.

01:17:27: With the .NET 8, they've been able to host sessions which had, I think at the peak, 44 users.

01:17:34: And all the users, all of their IK, all the dynamic bolts, all the ProtoFlux,

01:17:40: everything that's been computed on graphical client, it was being computed on headless,

01:17:46: minus obviously rendering stuff.

01:17:49: And the headless was able to maintain 60 frames per second with 44 users.

01:17:55: Which is like an order of magnitude improvement over running with mono.

01:18:06: So, doing it for headless first, that sort of let us gauge how much of a performance improvement will this switch make.

01:18:14: And whether it's worth it to do the separation as early as possible.

01:18:19: And based on the data, it's pretty much like, I feel like it's very, very worth it.

01:18:32: Where you can pull Froox Engine out of Unity, run it with .NET 9.

01:18:37: And then the communication will just do, instead of the communication happening within the process,

01:18:43: it's going to pretty much happen the same way except across process boundary.

01:18:52: The other benefit of this is, how do we align this?

01:18:56: Because even when we do this, once we reach this point, we'll still want to get rid of Unity for a number of reasons.

01:19:05: One of those is being like custom shaders. Those are really, really difficult to do with Unity,

01:19:10: at least making them real-time and making them support backwards compatibility,

01:19:15: making sure the content doesn't break, stuff like that.

01:19:18: Being able to use more efficient rendering methods.

01:19:21: Instead of having to rely on deferred, we'll be able to use clustered forward,

01:19:29: which can handle lots of different shaders with lots of lights.

01:19:34: We'll want to get rid of Unity as well, and this whole thing,

01:19:38: where the communication between FrooxEngine, which does all the computations,

01:19:44: and then sends stuff like, please render this stuff for me.

01:19:48: Because this process makes this a little more defined,

01:19:51: we can essentially take the whole Unity, just gonna yeet it away,

01:20:00: and then we'll plug in Sauce instead.

01:20:08: So Sauce is gonna have its own things, and inside Sauce is actually gonna be,

01:20:12: right now it's being built on the Bevy rendering engine.

01:20:17: So I'm just gonna put it there, and the communication is gonna happen

01:20:20: pretty much the same way, and this is gonna do whatever.

01:20:25: So we can snip Unity out, and replace it with Sauce.

01:20:30: There's probably gonna be some minor modifications to this,

01:20:32: how it communicates, so we can build around the new features of Sauce and so on.

01:20:37: But the principle of it, by moving FrooxEngine out,

01:20:41: by making everything neatly contained, making a neat communication method,

01:20:49: is the next step.

01:20:52: There's actually the latest thing from the development of Sauce,

01:20:55: there was actually a decision made that Sauce is probably not gonna have

01:20:58: any C-Sharp parts at all, it's gonna be purely Rust-based,

01:21:02: which means it doesn't even need to worry about .NET 9 or C-Sharp Interop,

01:21:10: because its responsibility is gonna be rendering whatever FrooxEngine sends it,

01:21:15: and then maybe sending some messages back,

01:21:19: where it needs to, over the actual communication channel, to sync stuff up.

01:21:24: But all the actual world model, all the interaction that's gonna be fully contained

01:21:31: in FrooxEngine external to Sauce.

01:21:35: That on its own is gonna be a big upgrade,

01:21:38: because it's gonna be a much more modern engine,

01:21:41: we'll be able to do things like the custom shaders, like I was mentioning.

01:21:45: There's some potential benefits to this as well,

01:21:47: because the multi-process architecture is inspired by Chrome and Firefox,

01:21:53: which do the same thing, where your web browser is actually running multiple processes.

01:22:01: One of the benefits that adds is sandboxing,

01:22:04: because once this is kind of done, we'll probably do the big move like this,

01:22:09: and at some point later in the future, we'll split this even into more processes,

01:22:15: your host can be its own process, also .NET 9, or whatever the .NET version is.

01:22:22: So this can be one world, this can be another world,

01:22:26: and these will communicate with this, and this will communicate with this.

01:22:30: And the benefit is if a world crashes, it's not gonna bring the entire thing down.

01:22:36: It's the same thing in a web browser.

01:22:38: If you ever had your browser tab crash, this is a similar principle.

01:22:43: It crashes just the tab instead of crashing the whole thing.

01:22:47: Similar thing, we might be able to, I'm not promising this right now,

01:22:50: but we might be able to design this in a way where if the renderer crashes,

01:22:55: we'll just relaunch it. You'll still stay in the world that you're in,

01:22:58: your visuals are just gonna go away for a bit and then gonna come back.

01:23:01: So we can reboot this whole part without bringing this whole thing down.

01:23:05: And of course, if this part comes down, then it's over, and you have to restart.

01:23:11: But by splitting into more modules, you essentially eliminate the possibility of the whole thing crashing,

01:23:19: because this part will eventually be doing relatively little.

01:23:22: It's just gonna be coordinating the different processes.

01:23:26: But for the first part, we're just gonna move Froox Engine into a separate process out of Unity.

01:23:33: That's gonna give us big benefit thanks to .NET 9.

01:23:37: There's other benefits because, for example, Unity's garbage collector is very slow

01:23:42: and very CPU heavy, but .NET 9's is way more performant as well.

01:23:48: We'll be able to utilize new performance benefits of .NET 9 in the code itself

01:23:52: because we'll be able to start using new functions within Froox Engine

01:23:56: because now we don't have to worry about what Unity supports.

01:24:01: Following that, the next big step is probably gonna be to switch to Sauce.

01:24:07: So we're gonna replace Unity with Sauce, and at some point in the future

01:24:10: we'll do more splitting for Froox Engine into more separate processes to improve stability.

01:24:17: And also add sandboxing, because once you do this, you can sandbox this whole process

01:24:23: using the operating system sandboxing primitives, which will improve security.

01:24:30: So that's the general overall plan, what we want to do with the architecture of the whole system.

01:24:38: I've been reading a lot about how Chrome and Firefox did it, and Firefox actually did a similar thing

01:24:43: where there used to be a monolithic process, and then they started doing work to break it down into multiple processes,

01:24:50: and eventually they did just two processes, and then they broke it down into even more.

01:24:55: And we're essentially gonna be doing a similar thing there.

01:24:58: So I hope this answers it, gives you a better idea of what we want to do with performance for Resonite

01:25:08: and what are the major steps that we need to take, and also explains why we are actually reworking the particle system and audio system.

01:25:18: Because on the surface, it might seem, you know, there's, like, why we're reworking the particle and audio system

01:25:26: when we want, you know, more performance, and the reason is, you know, just so we can kind of move them

01:25:32: fully into Froox Engine, make them kind of mostly independent, like, you know, of Unity,

01:25:37: and then we can pull the Froox Engine out, and that's the major reason we're doing it.

01:25:41: The other part is, you know, so we have our own system that we kind of control, because once we also switch Unity for Sauce,

01:25:49: if the particle system was still in Unity, Sauce would have to reimplement it, and it would also complicate this whole part.

01:25:56: Because, like, now we have to, like, synchronize this particle system with all the details of the particle system on this end.

01:26:05: So, that's another benefit. But, there's also some actual performance benefit, even just from the new particle system.

01:26:15: Because the new particle system is designed to be asynchronous.

01:26:19: Which means if it's doing something really heavy, only the particle system is going to lag, and you will not lag as much.

01:26:25: Because the particle system, if it doesn't finish its computations within a specific time, it's just going to skip and render the previous state.

01:26:37: And the particle system itself will lag, but you won't lag as much. So that should help improve your overall framerate as well.
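
A tiny sketch of that asynchronous idea: the frame loop never waits for the particle update, and if the background computation isn't done in time it just keeps showing the previous state, so only the particles lag. Purely illustrative, not PhotonDust's actual code:

<syntaxhighlight lang="python">
import threading, time

class AsyncParticles:
    def __init__(self):
        self.display_state = "state#0"        # what the renderer draws this frame
        self._pending = None

    def _simulate(self, index):
        time.sleep(0.03)                      # pretend this step is expensive (30 ms)
        self._pending = f"state#{index}"

    def kick_off(self, index):
        threading.Thread(target=self._simulate, args=(index,), daemon=True).start()

    def latest(self):
        if self._pending is not None:         # new state ready? swap it in
            self.display_state, self._pending = self._pending, None
        return self.display_state             # otherwise keep showing the old one

particles = AsyncParticles()
particles.kick_off(1)
for frame in range(5):
    print(f"frame {frame}: drawing {particles.latest()}")
    time.sleep(0.01)                          # ~10 ms frames, shorter than the 30 ms sim
</syntaxhighlight>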

01:26:44: So, that's pretty much the gist of it. The particle system is almost done. We'll probably start testing this upcoming week.

01:26:55: The audio system, that's going to be the next thing. After that, it's going to be the interface with Unity.

01:26:59: Once that is done, then the pull happens into the separate process, which is going to be a relatively simple process at that point.

01:27:06: Because everything will be in place for the pullout from Unity to happen.

01:27:13: So, hopefully this gives you a much better idea.

01:27:17: And if you have any questions about it, feel free to ask. We're always happy to clarify how this is going to work.

01:27:24: I'm going to go back. Boink. There we go.

01:27:31: We're going to sit down. There we go.

01:27:36: So that was another of those kind of rabbit hole questions. I kind of did this explanation on my first episode.

01:27:43: But I kind of wanted to do it again because I have a little bit better setup with the right things.

01:27:48: So we can also make a clip out of it so we have something to refer people to.

01:27:54: But that's the answer to Nitra's question. I'm also checking the time. I have about 30 minutes left.

01:27:59: So let's see. There's a few questions but I think we can get through them.

01:28:06: It's actually kind of working out. I've been worried a little bit because I'm taking a while to answer some of the questions and going on tangents.

01:28:13: But it seems to kind of work out with the questions we have.

01:28:18: So next, ShadowX. Does all common splatting software encode... Oh, I already answered that one.

01:28:27: So, the VM is kind of like an optimization layer rather than something akin to CLR or Chrome V8.

01:28:33: So it has the fundamentals of a VM but the goal is just to resolve ahead of time what needs to be run so it can run quickly.

01:28:39: It's the same general principle as the CLR or Chrome's V8.

01:28:45: VM is essentially just an environment in which the code can exist and in which the code operates.

01:28:51: And the way the VM runs that code can differ. Some VMs can be purely interpreted.

01:28:59: You literally maybe just have a switch statement that is just switching based on the instruction and doing things.

01:29:05: Maybe it compiles into some sort of AST and then evaluates that.

01:29:11: Or maybe it takes it and actually emits machine code for whatever architecture you're running on.

01:29:16: There's lots of different ways for VM to execute your code.

01:29:20: So the way ProtoFlux executes code and the way CLR or V8 executes code is different.

01:29:28: I think actually V8 does a hybrid where it converts some parts into machine code and some it interprets.

01:29:37: But it doesn't interpret the original typed code. It interprets some of the abstract syntax.

01:29:43: I don't fully remember the details but I think V8 does a hybrid where it can actually have both.

01:29:51: CLR I think always translates it to machine code but one thing they did introduce with the latest versions is they have multi-tiered JIT compilation.

01:30:05: One of the things they do is when your code runs they will JIT compile it into machine code which is actually native code for a CPU.

01:30:16: And they JIT compile it fast because you don't want to be waiting for the application to actually run.

01:30:26: But that means they cannot do as many optimizations.

01:30:28: What they do though is, when the JIT compiler makes that code, that's done in a very quick way so it's not as optimal, and it has a counter for each time a method is called.

01:30:41: And if it crosses a certain threshold, say the method gets called more than 30 times, it's going to trigger the JIT compiler to compile a much more optimized version.

01:30:51: It goes really heavy on the optimizations to make much faster code, which is going to take it some time.

01:31:00: But also in the meanwhile, as long as it's doing it, it can still keep running the slow code that was already JIT compiled.

01:31:06: Once the JIT compiler is ready, it just swaps it out for the more optimized version and at that point your code actually speeds up.

01:31:17: So if we have code that's being called very often, like the main game loop for example, it ends up compiling it in a very optimal way.

01:31:26: If you have something that runs just once, like for example some initialization method.

01:31:30: Like when you start up the engine, there's some initialization that only runs once.

01:31:34: It doesn't need to do heavy optimizations on it because they're just a waste of time.

01:31:38: It speeds up the startup time and it kind of optimizes for both.
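
A toy model of that tiering behavior: run a quickly-produced version first, count calls, and once a method gets "hot" swap in a more optimized version. The threshold and the two implementations below are invented for illustration; the real CLR does this at the machine-code level:

<syntaxhighlight lang="python">
class TieredMethod:
    HOT_THRESHOLD = 30

    def __init__(self, quick_version, optimized_factory):
        self.impl = quick_version              # cheap-to-produce tier 0
        self.optimized_factory = optimized_factory
        self.calls = 0
        self.promoted = False

    def __call__(self, *args):
        self.calls += 1
        if not self.promoted and self.calls > self.HOT_THRESHOLD:
            self.impl = self.optimized_factory()   # "re-JIT" the hot method
            self.promoted = True
            print(f"promoted to optimized tier after {self.calls} calls")
        return self.impl(*args)

def quick_sum(n):                              # tier 0: simple but slow
    return sum(range(n))

sum_below = TieredMethod(quick_sum, lambda: (lambda n: n * (n - 1) // 2))
for _ in range(40):
    sum_below(1000)                            # same answer before and after promotion
print(sum_below(1000))                         # 499500 either way
</syntaxhighlight>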

01:31:41: And I think the latest version, it actually added, I forget the term for it they used,

01:31:50: but it's essentially like a multi-stage compilation where they look at what are the common arguments for a particular method,

01:31:59: and then assume those arguments are constants and they will compile a special version of the method with those arguments as constants,

01:32:07: which lets you optimize even more because now you don't have to worry can this argument be different values and you have to do all the math.

01:32:14: It can pre-compute all that math that is dependent on that argument ahead of time so it actually runs much faster.

01:32:21: And if we have a method that is oftentimes called with very specific arguments, it now runs much faster.

01:32:28: And there's actually another VM that did this called the LuaJIT, which is like runtime for the Lua language.

01:32:39: And what was really cool about that one is like, even for Lua, it's just considered this kind of scripting language.

01:32:47: LuaJIT was able to outperform languages like C and C++ in some benchmarks.

01:32:53: Because with C and C++, all of the code is compiled ahead of time.

01:32:59: So you don't actually know what kind of arguments you're getting.

01:33:02: What LuaJIT was able to do is be like, okay, this value is always an integer.

01:33:08: Or maybe this value is always an integer that's number 42.

01:33:14: So I'm just going to compile a method that assumes this is 42.

01:33:19: And it makes a super optimal version of the method.

01:33:23: And that runs. It's even more optimized than the C and C++.

01:33:28: Because C and C++ cannot make those assumptions.
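
And the specialization trick reads roughly like this as a toy: notice that an argument is effectively constant, then build a version of the method with the work that depends on it precomputed. The thresholds and functions are invented; LuaJIT and the CLR do this on compiled traces and methods, not Python closures:

<syntaxhighlight lang="python">
import math

def make_specializer(general_fn, specialize_for):
    seen, specialized = {}, {}
    def wrapper(scale, x):
        if scale in specialized:                  # fast path with `scale` baked in
            return specialized[scale](x)
        seen[scale] = seen.get(scale, 0) + 1
        if seen[scale] > 10:                      # "it's always 42"? specialize on it
            specialized[scale] = specialize_for(scale)
        return general_fn(scale, x)
    return wrapper

def general(scale, x):
    return math.sqrt(scale) * math.log(scale) * x   # redoes the scale-dependent work every call

def specialize_for(scale):
    constant = math.sqrt(scale) * math.log(scale)   # hoisted out once
    return lambda x: constant * x

fast = make_specializer(general, specialize_for)
for i in range(20):
    fast(42.0, i)                  # same constant argument every time -> gets specialized
print(fast(42.0, 2.0), general(42.0, 2.0))          # identical results
</syntaxhighlight>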

01:33:32: There's some profiling compilers that actually run your code.

01:33:38: And they will try to also figure it out.

01:33:40: And then you compile your code with those profiling optimizations.

01:33:45: And they can do some of that too.

01:33:47: But it shows there's some benefits to the JIT compilers where they can be more adaptive.

01:33:57: And they can do it for free.

01:33:59: Because you don't have to do it as part of your development process.

01:34:05: Because once you upgrade it in your system, you get all these benefits.

01:34:11: And it's able to run your code that's the same exact code as you had before.

01:34:17: It's able to run it much faster because it's a lot smarter how it converts it into machine code.

01:34:24: So next question is, Nitra is asking, how about video players?

01:34:29: Are they already fully inside Froox Engine as well?

01:34:31: No, for video players they actually exist more on the Unity side.

01:34:36: And they are pretty much going to remain majorly on the Unity side.

01:34:41: The reason for that is because the video player has a very tight integration with the Unity engine

01:34:46: because it needs to update the GPU textures with the decoded video data.

01:34:52: It adds a bit to the mechanism because we will need to send the audio data back for the Froox Engine to process and send back.

01:35:01: So that's going to be...

01:35:05: Video players are essentially going to be considered as an asset, because even stuff like textures.

01:35:10: When you load a texture, it needs to be uploaded to the GPU memory, it needs to be sent to the renderer.

01:35:15: And the way I plan to approach that one is through a mechanism called shared memory,

01:35:20: where the texture data itself, Froox Engine will allocate in a shared memory,

01:35:24: and then it will pass over the pipe, it will essentially tell the renderer,

01:35:30: here's shared memory, here's the texture information, the size, format, and so on,

01:35:36: please upload it to the GPU memory under this handle, for example.

01:35:39: And it assigns it some kind of number to identify the texture.

01:35:45: And essentially sends that over to the Unity engine, the Unity engine is going to read the texture data,

01:35:50: run the upload to the GPU, and it's going to send a message back to Froox Engine,

01:35:54: be like, you know, texture number 420 has been uploaded to the GPU.

01:35:59: And Froox Engine knows, OK, this one's now loaded.

01:36:02: And then when it sends it, please render these things, it's going to be like,

01:36:05: OK, render this thing with texture number 422.

01:36:09: And it's going to send it as part of its package to Unity, and Unity will know,

01:36:13: OK, I have this texture and this number, and it's going to prepare things.
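
The handle-based exchange described above boils down to a small message protocol; a sketch of the shape of it (the message names, fields and handle numbers are illustrative, not the actual interface):

<syntaxhighlight lang="python">
class FakeRenderer:
    def __init__(self):
        self.gpu_textures = {}

    def handle_message(self, msg):
        if msg["kind"] == "upload_texture":
            # would read pixel data from the named shared memory and upload it to the GPU
            self.gpu_textures[msg["handle"]] = (msg["width"], msg["height"], msg["format"])
            return {"kind": "texture_uploaded", "handle": msg["handle"]}
        if msg["kind"] == "render":
            missing = [h for h in msg["textures"] if h not in self.gpu_textures]
            return {"kind": "frame_done", "missing_textures": missing}

renderer = FakeRenderer()
loaded = set()

ack = renderer.handle_message({"kind": "upload_texture", "handle": 420,
                               "width": 1024, "height": 1024, "format": "RGBA8",
                               "shared_memory": "texture-420"})
if ack["kind"] == "texture_uploaded":
    loaded.add(ack["handle"])                 # the engine now knows 420 is on the GPU

print(renderer.handle_message({"kind": "render", "textures": [420]}))
</syntaxhighlight>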

01:36:20: It's going to be similar thing with video players, where the playback and decoding

01:36:23: happens on the Unity side, and Froox Engine just sends some basic information

01:36:28: to update it, be like, the position should be this and this,

01:36:34: you should do this and this with the playback, and it's going to be sending back some audio data.

01:36:39: But yeah, those parts are going to remain within Unity.

01:36:48: We have like 25 minutes, and there's only one question right now.

01:36:56: Chief item, I guess something I don't really understand about Resonite

01:36:59: is where simulation actually happens.

01:37:02: Does the server simulate everything and the clients just pull,

01:37:04: or do the clients do some work and send it to the server? It's a mix.

01:37:07: For example, player IK, local ProtoFlux.

01:37:10: Or do all servers and clients simulate everything?

01:37:12: Local, as defined in ProtoFlux, can get pretty confusing pretty fast.

01:37:17: So usually it's a mix.

01:37:19: But the way Resonite works, or FrooxEngine works,

01:37:25: is by default, everything is built around your data model.

01:37:29: And by default, the data model is implicitly synchronized.

01:37:33: Which means if you change something in the data model,

01:37:35: FrooxEngine will replicate it to everyone else.

01:37:41: And the way most things, like components and stuff works,

01:37:44: is the data model itself, it's sort of like an authority on things.

01:37:51: It's like the data model says, this is how things should be.

01:37:55: And any state that ends up representing something,

01:38:01: like whether it represents something visual, some state of some system,

01:38:05: that's fully contained within the data model.

01:38:08: And that's the really important part.

01:38:09: The only thing that can be local to the components by default

01:38:13: is any caching data, or any data that can be deterministically

01:38:20: computed from the data model.

01:38:23: So if the data model changes, it doesn't matter what internal data it has,

01:38:26: the data model says things should be this way.

01:38:32: And then the whole synchronization is built on top of that.

01:38:35: The whole idea is by default, you don't actually have to think about it.

01:38:39: The data model is going to handle the synchronization for you.

01:38:43: It's going to resolve conflicts.

01:38:44: It's going to resolve if multiple people change a thing,

01:38:48: or if people change a thing they're not allowed to,

01:38:50: it's going to resolve those data changes.

01:38:55: And you essentially build your systems to respond to the data.

01:38:58: So if your data is guaranteed to be synchronized and conflict resolved,

01:39:03: then the behaviors that depend on the data always lead to the same,

01:39:07: or at least convergent, results.

01:39:11: What this does, it kind of gives you the flexibility to write systems in all different ways.

01:39:18: But the main thing is, you don't have to worry about synchronization.

01:39:25: It just kind of happens automatically.

01:39:29: And it kind of changes the problem where instead of...

01:39:34: Instead of like...

01:39:42: Instead of things being synced versus non-synced being a problem,

01:39:49: it turns it into an optimization problem.

01:39:52: Because you could have people computing multiple kinds of things,

01:39:56: or computing it on the wrong end, things that can be computed from other stuff.

01:40:00: And you end up wasting network traffic as a result.

01:40:04: But for me, that's a much better problem to have than things getting out of sync.

01:40:10: What we do have is the data model has mechanisms to optimize that.

01:40:14: One of those mechanisms being drives.

01:40:18: Drives essentially tell...

01:40:20: The drive is a way of telling the data model,

01:40:22: I'm taking control of this part of the data model.

01:40:26: Don't synchronize it.

01:40:27: I am responsible for making sure this stays consistent when it needs to.

01:40:37: And the way to think about drives is you can have something like a SmoothLerp node.

01:40:41: Which is one of the community favorites.

01:40:44: And the way that works, it actually has its own internal computation that's not synchronized.

01:40:48: Whatever the input to the SmoothLerp is, that's generally synchronized.

01:40:53: Because it comes from a data model, but the output doesn't need to be synchronized

01:40:57: because it's convergent.

01:40:59: So you're guaranteed to have the same value on the input for all users,

01:41:04: you can fully compute the output value on each user locally.

01:41:08: Because it's all converging towards the same value.

01:41:12: And as a result, everybody ends up with, if not the same,

01:41:18: at least very similar result on their end.
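
A small sketch of why that works: every client runs the same smoothing toward the same synchronized input, so the locally computed outputs converge on their own and never need to be sent over the network. The smoothing formula here is a generic exponential lerp, not necessarily the exact node implementation:

<syntaxhighlight lang="python">
def smooth_step(current, target, speed, dt):
    return current + (target - current) * min(1.0, speed * dt)

synced_target = 10.0            # comes from the data model, same for everyone
client_a, client_b = 0.0, 7.3   # local, unsynchronized starting states differ
for frame in range(60):
    client_a = smooth_step(client_a, synced_target, speed=4.0, dt=1 / 60)
    client_b = smooth_step(client_b, synced_target, speed=4.0, dt=1 / 60)
print(round(client_a, 3), round(client_b, 3))   # both end up near 10, no sync needed
</syntaxhighlight>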

01:41:22: It is also possible, you can, if you want to, diverge this data model.

01:41:27: For example, ValueUserOverride, it does this.

01:41:32: But it does this in an interesting way, because it makes the divergence of values,

01:41:36: it actually makes it part of the data model, so the values that each user is supposed to get

01:41:41: is still synchronized, and everybody knows this user should be getting this value.

01:41:47: But the actual value that you derive with it, that's diverged for each user.

01:41:51: And it's like a mechanism built on this principle to handle this kind of scenario.

01:41:57: You can also sometimes derive things from things that are going to be different on each user,

01:42:02: and have each user see a different thing. You can diverge it.

01:42:06: The main point is, it's a deliberate choice.

01:42:10: At least it should be in most cases, unless you do it by accident.

01:42:13: But we'll try to make it harder to do it by accident.

01:42:21: If you don't specifically make this thing desync, it's much less likely to happen by accident.

01:42:30: Either by bug or misbehaviour.

01:42:32: The system is designed in a way to make sure that everybody shares the same general experience.

01:42:41: Generally, if you for example consider IK, like you mentioned IK.

01:42:48: IK, the actual computations of the bones, that's computed locally.

01:42:54: And the reason for that is because the inputs to the IK is the data model itself which is synchronised.

01:43:01: And the real-time values are hand positions, if you have tracking, it's feet position, head position.

01:43:07: Those are synchronised.

01:43:09: And because each user gets the same inputs to the IK, the IK on everybody's end ends up computing the same or very similar result.

01:43:19: Therefore the actual IK itself doesn't need to be synchronised.

01:43:23: Because the final positions are driven.

01:43:27: And essentially that is a way of the IK saying, if you give me the same inputs, I'm going to drive these bone positions to match those inputs in a somewhat mostly deterministic way.

01:43:42: So that doesn't need to be synchronized.

01:43:45: You also mentioned local protoflux.

01:43:47: With local protoflux, there is actually a way for you to hold some data that's outside of the data model.

01:43:54: So locals and stores, they are not synchronised.

01:43:58: If you drive something from those, it's going to diverge.

01:44:03: Unless you take the responsibility of computing that local in a way that's either convergent or deterministic.

01:44:11: So locals and stores, they're not going to give you a synchronization mechanism.

01:44:15: One thing that's missing right now, what I want to do to prevent divergence by accident, is have a localness analysis.

01:44:23: So if you have a bunch of protoflux and you try to drive something, it's going to check.

01:44:31: In the sources of this value, is there anything that is local?

01:44:38: And if it finds it, it's going to give you a warning.

01:44:40: You're trying to drive something from a local value.

01:44:42: Unless you make sure that the results of this computation stay synchronized, even if this value differs.

01:44:54: Or unless you make sure that the local value is computed the same for every user or is very similar, this is going to diverge.

01:45:02: And that will make it a much more deliberate choice.

01:45:08: Where you're like, OK, I'm not doing this by accident, I really want to drive something from a local value.

01:45:13: I'm taking the responsibility of making sure this will match for users.

01:45:18: Or if you have a reason to diverge things for each user, it's a deliberate choice.

01:45:24: You're saying, I want this part of the data model to be diverged.

01:45:30: So that kind of answers the question. There's a fair bit more I could say on this.

01:45:35: But the gist of it, the local in ProtoFlux, it literally means this is not part of the data model.

01:45:43: This is not synchronized for you. This is a mechanism you use to store data outside of the data model.

01:45:49: And if you feed it back into the data model, you sort of need to kind of take responsibility for making sure it's either convergent.

01:45:59: Or intentionally divergent.

01:46:02: The other part is, this also applies if you drive something.

01:46:06: Because if you use the local and then you write the value and the value is not driven,

01:46:10: the final value written into the data model ends up implicitly synchronized and you don't have the problem.

01:46:17: So this only applies if you're driving something as well.
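
The localness analysis mentioned above is essentially a graph walk over everything feeding a drive; a sketch of the idea, with an invented node representation rather than existing Resonite code:

<syntaxhighlight lang="python">
class FluxNode:
    def __init__(self, name, is_local=False, inputs=()):
        self.name, self.is_local, self.inputs = name, is_local, inputs

def find_local_sources(node, seen=None):
    """Return every local/store node reachable through the inputs of `node`."""
    seen = set() if seen is None else seen
    if id(node) in seen:
        return []
    seen.add(id(node))
    found = [node] if node.is_local else []
    for dep in node.inputs:
        found += find_local_sources(dep, seen)
    return found

synced_field = FluxNode("SomeSyncedField")
local_store = FluxNode("LocalValue<float>", is_local=True)
mul = FluxNode("Mul", inputs=(synced_field, local_store))
drive_source = FluxNode("Add", inputs=(mul, synced_field))

locals_found = find_local_sources(drive_source)
if locals_found:
    names = ", ".join(n.name for n in locals_found)
    print(f"Warning: drive source depends on local values ({names}); "
          f"this may diverge between users unless that's intentional.")
</syntaxhighlight>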

01:46:22: So I hope that kind of helps understand this a fair bit better.

01:46:26: Just about 14 minutes left. There's still like one question.

01:46:30: I'll see how long this kind of takes to answer, but at this point I might not be able to answer all the questions if more pop up.

01:46:39: But feel free to ask them, I'll try to answer as many as possible until we hit the full two hours.

01:46:47: So let's see.

01:46:49: Ozzy is asking, you mentioned before wanting to do cascading asset dependencies after particles, is this something you want to do?

01:46:56: It is an optimization that's sort of independent from the whole move, that I feel could be done fairly fast.

01:47:05: I still haven't decided, I'm kind of thinking about slotting in before the audio as well, as part of the performance optimizations,

01:47:12: because that one will help, particularly with memory usage, CPU usage, during loading things.

01:47:21: For example, when people load into the world, and you get the loading lag as the stuff loads in.

01:47:29: The cascading asset dependencies, they can particularly help with those cases when the users have a lot of stuff that's only visible to them,

01:47:39: but not to everyone else, because right now you will still load it.

01:47:46: If you have things that are culled, or maybe the users are culled, you'll load all of them at once, and it's just this kind of big chunk.

01:47:53: With this system, it will be kind of more spread out.

01:47:57: The part I'm not certain about is whether it's worth doing this now, or doing this after making the .NET 9 switch, because it is going to be beneficial on both.

01:48:06: So it's one of those optimizations that's smaller, and it's kind of independent of the big move to .NET 9.

01:48:14: And it could provide benefit even now, even before we move to .NET 9.

01:48:19: And it will still provide benefit afterwards.

01:48:22: I'm not 100% decided on this one, I'll have to evaluate it a little bit, and evaluate how other things are going.

01:48:30: It's something I want to do, but no hard decision yet.

01:48:38: Next, Climber is asking, could you use the red highlighting from broken ProtoFlux in a different color for local computation?

01:48:46: I don't really understand the question, I'm sorry.

01:48:50: If ProtoFlux is red and it's broken, you need to fix whatever is broken before you start using it again.

01:48:58: Usually if it's red, there's something wrong I cannot run at all.

01:49:06: I don't think you should be using it anyway. If it's red, you need to fix whatever the issue is.

01:49:16: Next, Nitra is asking, are there any plans to open-source certain parts of the code, for example, some of the components and ProtoFlux nodes, so that the community can contribute to those?

01:49:27: There's some plans, nothing's fully formalized yet.

01:49:32: We've had some discussions about it.

01:49:35: I'm not going to go too much into details, but my general approach is we would essentially do gradual open-sourcing of certain parts, especially ones that could really benefit from community contributions.

01:49:51: One example I can give you is, for example, the Importer and Exporter system, and also the Device Driver system, where it's very ripe for open-sourcing

01:50:08: where we essentially say, this is the Model Importer, this is the Volume Importer, and so on, and this is support for this format and this format.

01:50:19: And we can do this: we make the code open, where we offer community contributions, so people could contribute stuff like fixes for formats where something's importing wrong.

01:50:33: Or alternatively, say you want to add support for some obscure format that we wouldn't support ourselves, because you're modding some kind of game or something and you want to mess with things.

01:50:48: You now can use the implementation that we provided as a reference, add another Importer, Exporter.

01:50:58: Or, if you want to, if you need very specific fixes that are relevant to the project you're working on, you just make a fork of one of ours, or even the community's, and modify it for your purposes.

01:51:10: And you make changes that wouldn't make sense to have in the default one, but that are useful to you.

01:51:16: So, that's probably the kind of model I would want to follow, that is, initially we open source things partially, where it makes sense, where it's also easy to do, because open sourcing can be a complicated process if you want to do it for everything, because there's pieces of code that have certain licensing, and we need to make sure it's all compatible with the licensing, we need to make sure everything's audited and cleaned up.

01:51:44: So doing it by chunks, doing just some systems, I feel it's a much easier, more approachable way to start, and we can build from there.

01:51:55: The other part of this is, when you do open source something, you generally need maintainers.

01:52:03: Right now we don't really have a super good process for handling community contributions for these things, so that's something I feel we also need to heavily improve.

01:52:16: And that means we need to have prepared some manpower to look at the community pull requests, make sure we have a good communication there, and make that whole process run smoothly.

01:52:26: And also there's been some PRs that piled up against some of our projects, some of our open source parts, that we haven't really had a chance to properly look at because everything has been kind of busy.

01:52:39: So I'm a little bit hesitant to do it now, at least until we clear up some more things and we have a better process.

01:52:47: So it is also part of the consideration there.

01:52:51: But overall, it is something I'd want to do. I feel like you as a community, people are doing lots of cool things and tools.

01:53:00: Like the modding community, they do lots of really neat things.

01:53:06: And doing the gradual open sourcing, I feel that's a good way to empower you more, to give you more control over these things, give you more control to fix some things.

01:53:16: These parts, we're a small team, so sometimes our time is very limited to fix certain niche issues.

01:53:24: And if you give people the power to help contribute those fixes, I feel like overall the platform and the community can benefit from those.

01:53:34: As well as giving you a way to...

01:53:38: A big part of Resonite's philosophy is giving people as much control as possible.

01:53:43: Making the experience what you want it to be.

01:53:46: And I feel by doing this, if you really don't like something, or how Resonite handles certain things, you can fork that part and make your own version of it.

01:54:00: Or fix up the issues. You're not as dependent on us as you otherwise would have been.

01:54:07: The flipside to that, and the part that I'm usually worried about, is we also want to do it in a way that doesn't result in the platform fragmenting.

01:54:19: Where everybody ends up on a different version of the build, and then you don't have this shared community anymore.

01:54:27: Because I feel, especially at this stage, that can end up hurting the platform.

01:54:33: If it happens, especially too early.

01:54:36: And it's also one of the reasons I was thinking of going with importers and exporters first.

01:54:42: Because they cannot cause fragmentation, because they do not fragment the data model.

01:54:49: This just changes how things are brought into, or out of, the data model.

01:54:57: It doesn't make you incompatible with other clients.

01:55:01: You can have importers for data formats that only you support, and still exist with the same users.

01:55:08: And be able to join the same sessions.

01:55:11: That's pretty much on the whole open-sourcing kind of thing.

01:55:16: And that's the reason I want to approach it this way.

01:55:19: Take baby steps there, see how it works, see how everybody responds.

01:55:24: See how we are able to handle this, and then be comfortable with this.

01:55:32: We can now take a step further, open-source more parts.

01:55:37: And just make it a gradual process, then just a big flip of a switch, if that makes sense.

01:55:48: We have about 4 minutes left, there's no more questions.

01:55:52: So I might... I don't know if there's enough time for a ramble, because right now...

01:56:01: I don't know what I would ramble about, because...

01:56:05: Do I want to ramble more about Gaussian splats? I've already rambled about Gaussian splats a fair bit.

01:56:10: If there's any last questions, I'll try to answer them, but I might also just end up a few minutes early.

01:56:15: Otherwise I end up rambling about rambling, which is some sort of meta rambling, which I'm kind of doing right now...

01:56:23: I don't know, I'm actually kind of curious, like, with the Gaussian splats thing, is that something you'd like to see? Like, you'd like to play with?

01:56:34: Especially if you can bring stuff like this.

01:56:36: I can show you... I don't have, like, too many videos of those. I don't have, like, one more... no, wait.

01:56:45: Oh! I actually do have one video I did want to show. I do need to fetch this one from YouTube, because I haven't, like, imported this one. Hang on.

01:57:00: Let's see... Oh, so, like... Actually, let me just do this first before I start doing too many things at once.

01:57:08: There we go.

01:57:11: So I'm going to bring this one in.

01:57:18: Once it loads... This one's from YouTube, so it's, like, actually a little bit worse quality.

01:57:24: But this is another Gaussian splat. Well, I have lots, but I need to make videos of more.

01:57:30: This one I found, like, super neat, because, like, this is a very complex scene. This is from GSR at BLFC.

01:57:37: You can see, like, it captures a lot of, like, kind of cool detail, but there's a particular part I want you to pay attention to.

01:57:44: I'm going to point this out in a sec, because one of the huge benefits of Gaussian splats is they're really good at not only, you know, soft and fuzzy details, but also, you know, semi-transparent stuff.

01:57:56: So, watch this thing. Do you see, like, these kind of plastic, well, not plastic, but, like, these transparent, you know, bits? Do you see these?

01:58:05: Look at that. It's actually able to, you know, represent that, like, really well, which is something you really don't get, you know, with a traditional mesh-based, you know, photogrammetry.

01:58:16: And that's actually one of the things, you know, like, if you wanted to represent it as a mesh, you'd kind of lose that.

01:58:22: And that's why, you know, why Gaussians are really good at, you know, representing scenes that traditional photogrammetry is not.

01:58:32: And there's also lots of, like, you know, splats I've done, like, this October, I was, like, visiting the US, and I went to the Yellowstone National Park again.

01:58:42: I have lots of scans from there.

01:58:45: And, like, you know, a lot of them kind of have, like, you know, some, because there's a lot of geysers and everything, and there's actually, you know, there's, like, steam coming out of the geysers, you know, and there's, like, you know, water, so, like, it's reflective in some places.

01:58:56: And I found, with Gaussian splats, it actually reconstructs pretty well, like, it even captures, you know, some of the steam and air, and gives it, you know, more of a volume.

01:59:06: So they're, like, a really cool way, you know, of, like, representing those scenes, and I just kind of want to be able to, you know, bring that in and be, like, you know, publish those on Resonite, and be, like, you know, you want to come see, like, you know, bits of, like, you know, Yellowstone, bits of this and that, you know, like, just go to this world and you can just view it and show it to people.

01:59:26: But yeah, that actually kind of filled the time, actually, because we have about 30 seconds left, so I'm pretty much going to end it there.

01:59:36: So thank you very much for everyone, you know, for your questions, thank you very much for, like, you know, watching and, you know, and listening, like, my ramblings and explanations and going off tangents.

01:59:47: I hope you, like, you enjoyed this episode and thank you also very much, you know, for just, like, supporting Resonite, whether it's, like, you know, like, on our Patreon, you know, whether it's just by making lots of cool content or, you know, sharing stuff on social media, like, whatever, whatever you do, you know, it, it helps the platform and, like, you know, we appreciate it a lot.

02:00:16: Warp.