This is a transcript of The Resonance from December 1, 2024.
0:16: Hello! Well...
0:23: I literally clicked it under a minute ago. Hello everyone!
0:33: Hello [Music] everyone. I'm just making sure everything is running well, all the stream
0:51: stuff, making sure my audio is all good. Can you hear me fine? I think I was delayed by about 1.3 seconds. Also,
0:59: thank you for the cheer, Nitra. That was
1:04: quick. Okay, so we should be going. Let me make
1:11: sure... channel... there we go, everything should be going. So hello
1:16: everyone. Oh, oh boy, I'm getting feedback. There we go. Hello everyone, I'm Froox, and welcome
1:24: to the third episode of The Resonance. Oh my god, thank you so much for the cheers, Emergency Temporal Shift. Hello
1:31: everyone. So, this is the third episode of The Resonance. It's
1:36: essentially a combination of office hours and podcast, where you can ask anything about
1:43: Resonite, whether it's development, philosophy, how things are
1:49: going with development, its past, its future, where it's heading. The goal is to have a combination of Q&A, where you
1:56: can ask whatever questions you'd like, and I'll answer to the best of my
2:01: ability. I'm also hearing that the microphone sounds
2:08: weird; let me double check that OBS didn't switch microphones on me again. Test one, two. It's in front of my
2:17: face... properties... yeah, it's using the right one. Is it
2:24: understandable? That's really strange; it should be compressing.
2:30: It's a wireless microphone, but it's a custom thing. Anyway, let me know if
2:37: the voice is okay, if it's understandable. Anyway, you're free
2:43: to ask any questions, and I'll also do some general talking about
2:49: the high-level concepts of Resonite: its past, what its future is going to be, which direction we
2:56: want to head in, and so on. One thing: if you want to ask questions, make sure to put a question
3:04: mark. Actually, let me double check... oh, I didn't save the version with the auto-add. Auto-pin, there we go. Okay, now it
3:13: should be working. Make sure to end your question with a question mark; that way it pops up on my thing. And
3:21: we already have some popping up, so with that we should be good to get started. I might switch this
3:27: camera, just like this, there we go. So hello everyone, hello
3:33: BatnameSK, hello TryBG, hello Le. We actually got our first question, from Lexo:
3:41: "Do you think you'll have any other guests on your show that might focus on specific topics?" It's possible. The
3:48: goal of this is to kind of be my
3:55: office hours,
4:01: so that's probably going to be the main focus, but I'm also playing with the format a bit. Unfortunately Cyro couldn't make it; I usually have Cyro co-hosting, because we have a good back and forth
4:07: on the technical things and the philosophy of Resonite.
4:13: I'll kind of see where it goes, because I had some ideas, and for the first two streams
4:21: that I did there were so many questions that we didn't really get much to the chill parts of it, so I want to
4:26: explore that. We might have some other people on as well, to talk with them about specific
4:32: topics, but I don't have any super specific plans just yet.
4:37: We'll see. At least for a start I want
4:43: to keep things simple, you know, take it with baby steps. But I feel like
4:51: at some point I'll probably start bringing in more people as well, so we can talk about how some
4:56: stuff has been going, and so on. But I'm still figuring things
5:03: out. So the next question is from Emergency Temporal Shift: "What is the
5:10: funnest particle?" It kind of depends.
5:16: Maybe the thing that comes to mind is particles that would make sounds, but that's one of the things
5:21: you cannot do right now. Imagine a particle that makes some
5:26: kind of fun "plop" sound, that goes "boink" every time it bounces, or something like that.
5:34: It's actually an interesting thing, because it's a request we got in the past; I'm
5:39: kind of going off on a tangent right now. People want
5:46: particles to make a sound when they collide with something, or they generally want events so they can react to them. The only
5:51: thing is, the particle simulation runs locally for each person, so it's not
5:57: 100% in sync. People will be seeing similar things, but not exactly
6:03: the same. Say you have a clump of particles:
6:08: one might go this way for you, and for another person it goes that way. So if you have one person handling the
6:14: events, then something might happen like this: say for me the particle hits here, and for the other
6:19: person it hits there. If you fire an event at that point, there's going to be a bit of a disconnect for other
6:26: users, because for them the particle hit over there, or maybe just missed.
6:32: It's an interesting problem, and one way to approach that kind of thing is to make sure the effects are only
6:38: things that are local. For example, a local sound: say I make bubbles, and if you
6:44: poke them and they pop, you will hear the pop sound, and it's going to be localized mostly to you.
6:51: If it's similar enough, it's going to be close, and most people will not notice. But it's a
6:59: serious problem, and a bit of a tangent from the question. So, the next one:
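The divergence described above can be illustrated with a minimal Python sketch; the drift model and per-client seeds are invented for illustration and are not Resonite's actual particle code:

```python
import random

def simulate_particle(seed, steps=100):
    """Advance a particle with client-local randomness.

    Each client runs its own simulation, so unless every source of
    randomness is synchronized, trajectories drift apart between clients.
    """
    rng = random.Random(seed)  # each client has its own RNG state
    x = 0.0
    for _ in range(steps):
        x += rng.uniform(-1.0, 1.0)  # local-only turbulence per step
    return x

# Two clients rendering "the same" particle end up at different positions,
# so an event fired where one client saw a collision may look wrong
# (or never happen at all) from the other client's point of view.
pos_a = simulate_particle(seed=1)
pos_b = simulate_particle(seed=2)
print(pos_a != pos_b)
```

This is why local-only effects (like a pop sound heard mostly by the person who touched the bubble) sidestep the problem: nobody compares their result against anyone else's.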
7:08: JVon4 is asking, "I was confused by the stress test. The announcement said it was run on .NET 8 as opposed to .NET 9. Was there
7:15: a typo, or was it meant to establish a baseline?" It was kind of neither.
7:21: We have the .NET 9 headless ready, and we wanted to push it out already, to give people time to
7:28: prepare, but the team running the event decided to go with .NET 8
7:34: for the testing. The main test wasn't even the performance of the headless itself; it
7:40: was making sure that the hardware and the connections it runs on are able to handle the event.
7:46: So the focus wasn't so much on testing the performance of the headless itself, but more the combination with the
7:52: hardware, making sure we are ready for the event with the setup we have. It's also because
7:58: we want the event to be as flawless as possible. The team decided
8:04: to go with .NET 8 because .NET 9
8:11: technically isn't released yet, so we stuck with .NET 8. Even though the probability
8:16: of .NET 9 breaking things is very low, even if it was a 1% chance that something might
8:23: break, we wanted to eliminate that 1%. GLVR is asking: "Is there any focus
8:30: on an actual tutorial? Right now new users suffer from not understanding the dashboard, how to find avatars, and how to
8:36: find people. This seems like an easy thing that can be delegated out and does not need to take up bandwidth." Yes,
8:42: this is actually something that the content team is already working on. They've been looking at ways to
8:47: improve the tutorial experience and overhaul some parts of it, and even the existing experience that we have has
8:53: already been mostly delegated.
8:58: Part of it is that getting new users onto the platform crosses lots of different systems,
9:04: because when you start up, we want to make sure that the user has an account, and that their
9:11: audio is working. So we guide them through the few initial steps, and there's a part of that that's
9:16: more on the coding side. If there are problems with
9:22: that part, or something in it needs changing, that needs to be handled by our engineering team. Then the
9:28: user gets brought into the tutorial that explains things, and that's mostly handled by the content team. We also have
9:34: Ariel, who's our new person handling a lot of the
9:39: marketing and development communications and so on. She's been looking at this too, because she's
9:46: relatively new to Resonite as well, so she's been using that perspective to say, "This is what we could
9:52: do to smooth out the initial user experience." She's been talking with the content team, and we're
9:58: looking at how we improve that experience to reduce
10:04: frustrations for new users. There are definitely a lot of things you could do. The only part is, it's
10:10: always difficult. I wouldn't say it's a simple thing
10:17: to do, because you always have to balance how much information you give the user at once. In the past we
10:24: tried approaches where we just told them about everything: there's the inventory, there are contacts, there's this thing and this
10:30: thing and this thing. What we found ends up happening is that a lot of users just get overwhelmed; they
10:37: kind of shut down, and then they don't even understand the basic bits. They won't know how to grab things
10:44: or how to switch locomotion. So you have to ease
10:51: users into it, build up simple interactions, and building those kinds
10:57: of tutorials takes a fair bit of time. There are other aspects to this as well. For example, one of the things that
11:03: people want to do when they come in is set up their avatar, and the tricky thing with that
11:10: is that it requires the use of advanced tools, like the developer tools, and it requires use of the avatar
11:16: creator. The avatar creator is something we want to rewrite, but that's an engineering task;
11:22: that's not something the content team can do right now. So there are several
11:28: aspects to this: the content team can build a better tutorial for some things, but some things do require
11:34: engineering work. We also have to design things in a way
11:39: that avoids wasting too much effort on certain things, because we know we're
11:45: going to rework stuff like the UI. The inventory UI, for example, is going to be reworked at some point, so then it
11:52: becomes a question of how much time we invest into a tutorial for the current one when we're going to replace
11:57: it. For some of those parts we might just do a simple tutorial, and do a better one later on once the
12:05: engineering tasks come through. So there's just a lot of
12:10: complexity to these kinds of things, and it takes time to improve them. What helps us the most is
12:18: getting information about what the particular frustration points are for new users. If somebody new comes to the
12:25: platform, what do they get stuck on? What do they want to do? What's their motivation? Because if we
12:31: know, okay, the user wants to set up their avatar, we can put in things that direct
12:36: them the right way.
12:44: With the avatar setup there's also a combination of: how do we make the tooling simpler, so we don't need as much tutorial for it? Because
12:49: one of the things we did a few months back was introduce Resonite packages. If there exists a Resonite package
12:56: for the avatar that the user wants to use, they just drag and drop it, which makes the whole process much simpler. We
13:01: don't have to explain how to use the developer tool or the material tool; you literally just
13:07: drag and drop, and you have a simple interface. But that doesn't work in 100% of the cases, so
13:13: it's a particularly challenging problem. But yes, it's
13:18: something we do talk about on the team, and it's important to us. We want to ease
13:25: the onboarding of new users, make them as comfortable as possible, and we're working at it from different fronts: both from
13:32: engineering and from the content side, as well as marketing and
13:38: communications. Let's see, so JackFoxOtter is asking: "My question for today
13:45: is about ProtoFlux. In what direction do you want the language to evolve going forward? What future language features
13:50: are you looking forward to?" There are a bunch.
13:57: How do I put it... the funny thing about it is, if I look at the actual
14:03: future of scripting in Resonite, it's actually not just ProtoFlux.
14:11: One of the things about
14:17: the way I view visual scripting is that it has its drawbacks and it has
14:23: its benefits. One of the drawbacks is that when you write really complex
14:29: behaviors, it gets a lot harder to manage, where a typical text-based
14:35: programming language might be simpler. But one of its benefits is that it's very hands-on: you
14:41: literally drag wires. If I want to control these lights, I just pull out things from them and I
14:46: drag wires, and it has a very hands-on feeling. It's very
14:52: spatial. And the way I imagine the optimal way for
14:58: this to work is to actually be combined with a more typical
15:03: text-based programming language. If you have a lot of heavy logic, a lot of
15:09: complex behaviors, it's
15:15: much, much simpler to code things
15:20: that way. But then when you want to wire those complex behaviors into the world, that's
15:26: where visual scripting can come in handy, and I think we'll get the most strength by combining both. The
15:34: way I want to approach typical text-based programming is by integrating WebAssembly, which
15:41: will essentially allow you to use lots of different languages, even languages like C and C++. With
15:50: those, you can bring support for other languages, like Lua, Python, lots of other
15:56: languages. You'd write a little bit of complex code, and then some of that code might be exposed as a node, and that node
16:02: you wire into other things, doing maybe a few little extra operations. If you're
16:07: familiar with electronics, it's almost like having an integrated circuit:
16:13: the integrated circuit has a lot of the complex logic, and it could be written in a typical language, compiled to a
16:19: WebAssembly module. And then the integrated circuit has a
16:25: bunch of extra things around it; it's wired into inputs and outputs that make it easier to
16:31: interface with things. So to me, that's the most
16:37: optimal state: where we have both, and you can combine them in a way where
16:43: you get the strengths of each, and the weaknesses of neither, essentially.
16:49: That said, there are definitely things we can do to improve ProtoFlux. There are two big
16:55: things I'm particularly looking forward to. The first is nested nodes; those will let you
17:01: create essentially packaged functions. You'll be able to define... I kind of want to draw this one,
17:09: so let me grab my pen. I should probably have done this at the start, but
17:17: I kind of forgot. Let's see, I'll try
17:22: to use this one. If I end up moving, this is probably
17:27: going to be too visually noisy, so I'm going to get up. Let's try this: I'm
17:33: going to move this over
17:38: here. Hello! Let me make sure I'm not colliding with
17:45: anything. So the idea is that you essentially define a node where you
17:51: have your set of inputs. This is my
17:57: thinking for the interface: these would be your inputs, so for example you can have
18:04: value inputs, you can have some impulse inputs, and you have some outputs, which could be values
18:10: as well as impulses. Then inside of the
18:15: node you can do whatever you want: maybe this goes here,
18:21: maybe this goes here, this goes here, this goes here, and here, and then this goes here,
18:29: or maybe, I don't know, this goes here and this goes here. Once you define this, it
18:36: essentially becomes its own node that you can then reuse. So you get a node that has
18:42: the same interface that you defined over
18:48: there, this is the internals of that node, and then you can have instances of that node that you
18:54: can use in lots of different places. With this kind of
19:01: mechanism, you'll be able to package a lot of common functionality into your own custom nodes and
19:08: just use them in a lot of places without having to copy all of this multiple times. That's going to help
19:15: with performance for ProtoFlux, because the system will not need to compile essentially the
19:20: same code multiple times. But it will also help the community,
19:25: because you'll be able to build libraries of ProtoFlux nodes and distribute them,
19:31: and let people use a library of your custom nodes. I think that's going to be a particularly big feature for
19:39: ProtoFlux once it lands. It's something that's already
19:45: supported internally by the ProtoFlux VM, but it's not integrated with FrooxEngine yet.
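As a rough illustration of the nested-node idea, here is a hedged Python sketch in which a node definition's body is "compiled" once and shared by every instance; the class names are invented for illustration and are not a Resonite API:

```python
class NodeDefinition:
    """A reusable node: declared inputs/outputs plus an inner body.

    The body is built once; every instance shares it, mirroring how
    nested ProtoFlux nodes would avoid recompiling duplicated graphs.
    """
    def __init__(self, inputs, outputs, body):
        self.inputs = inputs      # names of the value inputs
        self.outputs = outputs    # names of the value outputs
        self.body = body          # defined once, shared by all instances

    def instantiate(self):
        return NodeInstance(self)

class NodeInstance:
    """One placed copy of the node; it holds a reference, not a copy."""
    def __init__(self, definition):
        self.definition = definition

    def evaluate(self, **kwargs):
        return self.definition.body(**kwargs)

# Define once: a node that scales and offsets a value.
scale_offset = NodeDefinition(
    inputs=["value", "scale", "offset"],
    outputs=["result"],
    body=lambda value, scale, offset: {"result": value * scale + offset},
)

# Reuse it in many places without duplicating the internals.
a = scale_offset.instantiate()
b = scale_offset.instantiate()
print(a.evaluate(value=2, scale=3, offset=1))  # {'result': 7}
print(a.definition is b.definition)            # True: one shared definition
```

The point of the sketch is the sharing: many instances, one definition, so the "internals" exist (and would be compiled) only once.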
19:51: There's another aspect to this as well: once we have support for custom nodes, we can do lots of cool things,
19:59: because this essentially becomes like a function, like an interface. So you can have systems,
20:07: for example the particle system that I'm actually working on. Say you want to write
20:14: a module for that system. The particle system could have
20:19: bindings that accept essentially any node that, for example, has
20:28: three inputs: say, position,
20:36: lifetime, which is just how long the particle has existed, and
20:43: direction. And then we have an output, and the output is a new
20:49: position. Inside, you can essentially do whatever math you
20:55: want, and if your custom node follows this specific
21:00: interface, if it has these specific inputs and this specific output, it becomes a thing you can just drop in as a module
21:07: into the particle system, to drive the particle's position, for example, or its color, or
21:14: other properties. You'll be able to package behaviors, drop them into other non-ProtoFlux functions, and
21:21: essentially have a way to define, using visual scripting, completely new modules
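That interface-matching idea can be sketched in Python with duck typing; the required inputs and the drift module below are illustrative assumptions, not the actual particle-system bindings:

```python
import inspect

REQUIRED_INPUTS = ("position", "lifetime", "direction")

def accepts_as_module(node):
    """A particle-system binding could accept any callable whose signature
    matches the required inputs (position, lifetime, direction)."""
    params = tuple(inspect.signature(node).parameters)
    return params == REQUIRED_INPUTS

# A user-defined "node": drifts the particle along its direction,
# scaled by how long it has existed. It conforms to the interface,
# so the system could plug it in to drive particle positions.
def drift_module(position, lifetime, direction):
    return tuple(p + d * lifetime for p, d in zip(position, direction))

if accepts_as_module(drift_module):
    new_pos = drift_module(position=(0.0, 1.0, 0.0),
                           lifetime=2.0,
                           direction=(1.0, 0.0, 0.0))
    print(new_pos)  # (2.0, 1.0, 0.0)
```

Any function with the same inputs and output shape is interchangeable, which is exactly what makes the node usable as a drop-in module.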
21:29: for the particle system. But it expands beyond that. You'll be able
21:34: to do procedural textures. One node that you might be able to do is one with an interface where
21:41: you literally have two inputs, or maybe even just one input: say, the UV, the UV coordinate in the texture,
21:50: and maybe a color. Then inside you do whatever, and on the output
21:56: you have a color. If it follows this kind of interface,
22:02: what this essentially does is: you have a texture, which is like
22:07: a square, and for each pixel your node gets the UV coordinate and turns it into a
22:14: color. So if you want to make a procedural texture where each pixel
22:20: can be computed completely independently of all the others, all you need to do is define this: make sure you have a UV
22:26: input and a color output, and this whole thing can become your own custom
22:31: procedural texture, where you just decide, based on the coordinate you're at, whatever you want to compute for the
22:37: pixel color, and it's just going to compute it for you. With this, it
22:43: will also fit in a way that it can be done in a multi-threaded manner,
22:49: because each pixel is independent, so the code that's actually generating the texture can call this node in
22:56: parallel.
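Since each pixel depends only on its UV coordinate, the generator is free to evaluate the node for many pixels in parallel. Here is a small Python sketch of that per-pixel interface, with an invented checkerboard node standing in for a user-defined one:

```python
from concurrent.futures import ThreadPoolExecutor

def checker_node(uv):
    """A 'node' matching the procedural-texture interface: UV in, color out.
    Returns an RGB tuple for a 2x2 checkerboard pattern."""
    u, v = uv
    on = (int(u * 2) + int(v * 2)) % 2 == 0
    return (255, 255, 255) if on else (0, 0, 0)

def generate_texture(node, size):
    """Call the node once per pixel. Because pixels are independent,
    this can run multi-threaded without changing the result."""
    uvs = [((x + 0.5) / size, (y + 0.5) / size)
           for y in range(size) for x in range(size)]
    with ThreadPoolExecutor() as pool:
        pixels = list(pool.map(node, uvs))  # order is preserved
    return pixels

tex = generate_texture(checker_node, size=4)
print(tex[0], tex[2])  # (255, 255, 255) (0, 0, 0)
```

Swapping `checker_node` for any other UV-to-color function yields a different texture with no change to the generator, which is the point of the interface.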
23:05: There will also be more complicated ones: for example, your own custom procedural meshes. That one's going
23:10: to be probably a little bit more complicated,
23:17: because you'll have to build the geometry, but essentially the way that one might work is you get an impulse, then you do whatever logic you want to build a mesh, and you're done:
23:23: now you have your procedural mesh component, and you can use it like any other
23:29: procedural component. So I think once this goes in, it's going to be a particularly powerful
23:35: mechanism. A lot of systems, even ones that don't have much to do with ProtoFlux right now, will strongly
23:41: benefit from it. So this is going to be a really big feature
23:47: for ProtoFlux. The other one that I'm particularly
23:53: looking forward to, especially implementing it and playing with it, is the DSP mechanism.
23:59: What that will let you do is make sort of workflows with nodes, to do stuff like
24:06: processing audio, processing textures, and processing meshes. With those, you'll
24:14: be able to do stuff like build your own audio studio or music
24:20: studio, where you can run filters on audio. You can have signal generators, and you
24:27: could pretty much use Resonite to produce music or produce sound effects. Or you could use it to make an
24:33: interactive audio-visual experience, where there's a lot of real-time processing
24:39: of the audio, and you can feed it what's happening in the world and change those
24:45: effects. That on its own will open up a lot of new workflows and options that are not
24:52: available right now; or they're a little bit there, but not enough for
24:58: people to really realize it. So the DSP, that's a big one.
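A toy Python sketch of the audio side of such a DSP graph: a signal-generator node feeding a gain node to produce a sample buffer. None of this is Resonite's actual DSP API; it only illustrates the node-chaining idea:

```python
import math

def sine_generator(freq_hz, sample_rate=48_000, n_samples=48):
    """Signal-generator node: emits a sine wave as a list of samples."""
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n_samples)]

def gain(samples, factor):
    """Filter node: scales amplitude. Chaining such nodes forms a DSP workflow."""
    return [s * factor for s in samples]

# Wire the nodes: generator -> gain, the way one would wire a DSP graph.
buffer = gain(sine_generator(440.0), factor=0.5)
print(max(abs(s) for s in buffer) <= 0.5)  # amplitude is halved
```

Real-time interactivity would come from feeding node parameters (frequency, gain) from things happening in the world, then re-running the chain per audio block.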
25:05: Similarly with the texture one: you'll be able to do procedural textures, which on its own is also
25:11: really fun to play with. But also, once we have those, you'll be able to use Resonite as a production
25:17: tool. Even if you're building a game in Unity or Unreal, you could use Resonite as part
25:23: of your workflow to produce some of the materials for that game, and get a lot of the benefits of it being
25:30: a social sandbox platform. Say you're working on a sound effect, or working on
25:37: music, or working on a procedural texture: you can invite people in, and you can collaborate in real time. That's a given
25:45: thanks to Resonite's architecture; it's just automatic. If you have your favorite setup,
25:51: your studio for working, you can just save it into your inventory, send it to somebody, or just load it,
25:56: or you can publish it and let other people play with your studio setup.
26:01: So the DSP part is also, I think, going to be a big
26:08: doorway to lots of new workflows and lots of new ways to use Resonite.
26:14: I'm really excited for that part. Part of it is that I just love audio-visual stuff:
26:21: you wire a few nodes, and now you have some cool visuals coming out of it, or some cool audio,
26:27: and you can mess with it. There's another part, for the
26:34: mesh processing, because you could, for example, have a
26:39: node where you input a mesh, and on the output you get a
26:44: subdivided, smoothed-out mesh. Or maybe it bevels it, maybe it triangulates it, maybe it
26:51: applies a Boolean filter, or maybe there's some perturbation to the
26:56: surface. That feature, I think, will combine with yet another feature
27:03: that's on the roadmap, which is vertex-based mesh editing, because you'd essentially be able to do a thing where,
27:10: say, you have a simple mesh, and this is what you're
27:20: editing. And then this live mesh... I'm actually going to delete this one in the background,
27:26: because it's a bit bad for the [Music]
27:33: contrast. I'm taking a little bit long on this question, but this is one I'm particularly excited for, so I
27:40: want to go a little bit in depth on
27:45: this. Okay, that should be better. So you're editing this mesh, and then you have your own node setup
27:51: that's doing whatever processing, making a more complex shape out of it, because
27:56: it's applying a bunch of stuff. You edit one of the vertices, and it just runs through the
28:02: pipeline, your mesh DSP processing pipeline, and computes a new output mesh based on this as an input.
28:09: So you move this vertex, and maybe it does this kind of thing. You do this kind of
28:16: modeling... if you're used to Blender, this is what you do with modifiers, where you have a simple base geometry with a
28:23: subdivision surface, and you move vertices around, and it updates the more complex mesh by processing it with the
28:31: modifiers. The mesh DSP, combined with the vertex editing, will allow for a very
28:37: similar workflow, but one that I feel is even more powerful and flexible, and
28:44: also probably a lot more performant, because our processing pipeline is very asynchronous.
28:51: When I mess with Blender, one of the things that bugs me is that if you use a modifier that takes a lot of
28:58: processing, the whole interface essentially lags. The way stuff works in Resonite
29:03: is that it will not lag as a whole; only the thing that's updating will lag. Say this takes
29:09: a second to update: I move the vertex, I'll see the result in a second, but the world will not lag entirely for a second. So that
29:18: by itself, I think, will combine really well with lots of upcoming features, and also lots of
29:24: existing features. For me that's a big part,
29:31: even just beyond ProtoFlux: I like to design things this way, where each system is very
29:39: general, it does its own thing, but it also has lots of ways to interact with lots of other systems,
29:46: because that way you get all these emergent workflows that become
29:51: very powerful, and you get lots of ways to combine those systems into a single pipeline.
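The modifier-style pipeline can be sketched in Python: a base mesh is pushed through a stack of modifier functions, and re-running the stack after a vertex edit recomputes the output. The 2D mesh format and the modifiers here are invented for illustration:

```python
def translate(vertices, offset):
    """Modifier: shift every vertex by an offset."""
    ox, oy = offset
    return [(x + ox, y + oy) for x, y in vertices]

def midpoint_subdivide(vertices):
    """Modifier: insert the midpoint of each edge of a closed polyline,
    a simple stand-in for a subdivision-surface step."""
    out = []
    for i, (x, y) in enumerate(vertices):
        nx, ny = vertices[(i + 1) % len(vertices)]
        out.append((x, y))
        out.append(((x + nx) / 2, (y + ny) / 2))
    return out

def apply_pipeline(base, modifiers):
    """Run the base mesh through the modifier stack. Editing the base and
    re-running mirrors a non-destructive modifier workflow."""
    mesh = base
    for mod in modifiers:
        mesh = mod(mesh)
    return mesh

base = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # a unit square
pipeline = [midpoint_subdivide, lambda v: translate(v, (10.0, 0.0))]
result = apply_pipeline(base, pipeline)
print(len(result))  # 8 vertices: 4 originals plus 4 midpoints
```

In the asynchronous version described above, `apply_pipeline` would run off the main loop, so a slow modifier delays only its own output rather than freezing the whole world.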
29:59: So this should cover it. I'm going to hop back
30:05: here. I went a little deep on this particular question, but
30:13: hopefully that sheds some light on some of the future
30:18: things I want to do, not just with ProtoFlux, but with other
30:23: things too. There we go.
30:29: Sorry, I'm just settling back in. So I hope that answers the
30:35: question in good detail.
30:40: The next question:
30:45: Troyborg is asking, "You said you had a side project you wanted to work on when
30:52: you get done with the particle system, before starting the audio system rework. Are you able to talk about it?" Yes. So the thing I've been
30:59: thinking about is, essentially... I've been playing a
31:06: lot with Gaussian splatting recently, and I can show you some of the
31:11: videos. Let me bring up some of my
31:17: splats. Right now the only way I can show you these is through a video. This is
31:24: one I did very recently, and it's probably one of my best
31:30: scans. You can see, if I play this...
31:35: I almost asked if it loaded for you, but it only needs to be loaded for me. If you look
31:41: at this: this is a scan of a fursuit head of a friend who's here in the Czech Republic. His name is Amju. He let
31:49: me scan his fursuit head, and I first reconstructed it with the traditional technique, but then I started playing
31:54: with Gaussian splatting software. I threw the same data set at it, and the result is incredible. If you look at the
32:01: details of the fur, the technique is capable of capturing the
32:07: softness of it.
32:13: It just looks real; that's the easiest way to describe it. It gives you an
32:20: incredible amount of detail while still being able to render at
32:25: interactive frame rates. I've been 3D scanning for
32:30: years; I love 3D scanning stuff and making models of things, and this technique offers a
32:37: way to reconstruct things. Actually, hold on, let me also
32:42: bring up one more, so I can show you
32:48: how the result of this looks with traditional photogrammetry. If I bring this up, you see, this
32:57: is a traditional mesh, and it's still a pretty good result. I was really happy with this. But if you
33:03: look, there's no softness in the hair, and there are artifacts around the fur; it
33:09: gets kind of blobby, it loses its softness, which the Gaussian splats are
33:16: able to preserve. This is another example: I took the photos for this one
33:23: in 2016, I think about 8 years ago now, and
33:30: again, if you just look at this part, it just looks real. I'm
33:35: really impressed with the technique, and I've been having a lot of fun with it.
33:42: On my off time, I've been
33:50: looking for ways... I've been looking into how it
33:55: works and if I to C SPL like War It's relatively simple in principle like there it's it's like an extension of
34:02: Point Cloud where instead of like you know just tiny Points each each of the points can be like you know a colorful
34:08: blob it's like a has like fuzzy edges to it and they can have different sizes some of them you know you can actually
34:14: see some of the individual spls you know some of them are like long and stretched some of them can be round you know some
34:19: are small some are big uh and they can also change color based on which direction you're looking at them from um
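The splat primitive described here — a stretched, fuzzy point whose color depends on viewing direction — can be sketched roughly in Python. This is an illustrative sketch only, not Resonite code or any particular renderer; the field names are hypothetical, though the degree-0/1 spherical-harmonics constants are the standard ones used in common 3D Gaussian splatting implementations.

```python
from dataclasses import dataclass
import math

SH_C0 = 0.28209479177387814  # degree-0 real spherical-harmonics constant
SH_C1 = 0.4886025119029199   # degree-1 real spherical-harmonics constant

@dataclass
class Splat:
    position: tuple  # (x, y, z) center of the Gaussian blob
    scale: tuple     # per-axis extents: long/stretched vs. round
    rotation: tuple  # quaternion orienting the ellipsoid
    opacity: float   # how "solid" the fuzzy blob is
    sh: list         # spherical-harmonics coefficients, one RGB triple each

def view_dependent_color(splat, view_dir):
    """Sample the splat's color along a view direction (degrees 0-1)."""
    x, y, z = view_dir
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n
    # Degree 0: the constant base color
    r, g, b = (SH_C0 * c for c in splat.sh[0])
    # Degree 1: linear terms shift the color with the viewing angle
    if len(splat.sh) >= 4:
        for basis, (cr, cg, cb) in zip((-y, z, -x), splat.sh[1:4]):
            r += SH_C1 * basis * cr
            g += SH_C1 * basis * cg
            b += SH_C1 * basis * cb
    # 0.5 offset and clamp to [0, 1], a common convention in 3DGS code
    return tuple(min(1.0, max(0.0, c + 0.5)) for c in (r, g, b))
```

A splat carrying only the degree-0 triple has a fixed color; adding the three degree-1 triples makes the sampled color change as `view_dir` changes, which is what lets splats capture effects like reflections.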
34:28: So I've been essentially looking for a way to integrate Gaussian splatting rendering into Resonite, and I'm
34:35: fairly confident I'll be able to do it at this point — I understand it well enough to make an
34:42: implementation. The only problem is I don't really have time to commit to it right now, because I've been focusing on finishing the
34:48: particle system. But the thing I wanted to do is, after I'm done
34:54: with the particle system, mix in a smaller project that's more personal and fun, sort of as a
35:00: mental health break, pretty much — something I'd do
35:06: primarily for myself, because I want to bring those scans
35:11: in and showcase them to people. I'm still not 100% decided — I'll
35:17: see how things go — but I've been itching to do this, and I've been doing a lot of the research
35:22: on the weekends and so on to get this going. So
35:28: it's something I'd like to do, and I think it's also something that a lot of
35:34: people would appreciate, because I know there are other people in the community who have been playing
35:39: with splats and wanted to bring them in, and I think it would also make Resonite interesting to
35:46: a lot of people who might not even think about it now, because it's essentially going to give you the ability to
35:51: visualize Gaussian splats in a collaborative environment, so it might even open up
35:56: some new doors. But I'm not 100% decided — pretty much, this is
36:02: what I've been thinking about. Next, N64 is asking: are
36:10: there plans to add instant cut options for cameras do current fly from one place to another seeking looks a bit weird video over longing distances uh
36:17: you can already do this so there's an option uh I just have it a default which
36:23: does have the um which does you know have the fly
36:29: but there's literally a check boox in my UI inter ployed between anchors if I uncheck that and I click on another like
36:36: you know this is instant and I will click over there if I can read no there's a c way
36:42: um just going to do this uh I click on it you know I'm instantly here so that
36:47: feature already exist uh if it um if it kind of helps I can just you know keep this one on so it doesn't do the weird
36:54: fly through but yes I hope I would answers the
37:00: question uh next uh wicker di uh what would you like to be 5 years is there
37:06: specific goal or Vision so for me like um the general idea of resonite
37:15: is — it's kind of hard to put into words sometimes in a way that would communicate it well,
37:22: but it's almost like a layer,
37:29: a layer where you have certain guarantees. You're guaranteed that everything
37:35: is real-time synced, everything is real-time collaborative, everything is real-time
37:41: editable. You have integrations with different hardware. You have
37:46: persistence — you can save anything, whether locally or to the cloud; everything
37:51: can be persisted. And what I really want
37:57: Resonite to be is this layer for lots of different workflows
38:03: and lots of different applications. The most obvious one is social VR, where you're
38:09: hanging out with friends, you're watching videos together, you're playing games together, you're
38:16: chatting, or doing whatever you want to do. But if you think
38:21: about it, all of that is possible thanks to this baseline layer, and there are other
38:27: things you can do which also benefit from the social aspect. It ties into what I was talking about
38:33: earlier, about using Resonite as a work tool, as part of your pipeline — because if you want to
38:41: be working on music, if you want to be making art, if you want to be doing
38:46: design and planning, you still benefit from all these aspects of the software:
38:53: being able to collaborate in real time — if I'm working on something and showing
38:58: you something, you immediately see the results of it, and you can mess with it, modify
39:04: it — and you can build your own applications on it. Given its nature, you
39:10: can build your own tools and then share your tools with other people as well, if you want to. So for me,
39:18: what I would really want Resonite to be is sort of a foundation for lots of different applications, something that goes
39:24: beyond just social VR, but which enriches pretty much whatever task
39:32: you want to imagine with that social VR, real-time
39:38: collaboration, persistence, and networking aspect. Think of it as something like Unity
39:45: or Unreal — or other engines I shouldn't forget.
39:50: These engines may have been primarily designed for
39:56: building games, but people do lots of different things with them: they build scientific
40:01: visualization applications, medical training applications — some people build plain
40:07: utilities with them. They're very general tools which solve certain
40:15: problems for you, so you don't have to worry about low-level graphics programming in a lot of cases; you don't have to worry about
40:22: having a basic functional engine. You get those "for free", in
40:28: quotes — in the sense that you don't need to spend time on them; that's already provided for you, and you can
40:33: focus more on what your actual application is, whether it's a game, a tool, a
40:39: research application — whatever you want to build. I want to do the same,
40:44: but go a level further, where instead of just providing the engine, you get all the things I mentioned
40:51: earlier: you get real-time collaboration — whatever you build supports real-time collaboration; it supports persistence —
40:58: you can save it; you already have integrations with lots of different hardware and
41:04: interactions, like grabbing things — that's just given, you don't have to worry about it, and you can
41:09: build your applications around it. So I want Resonite to be
41:15: almost the next level beyond game
41:24: engines. Another analogy I use for this: if you look at early
41:29: computing, when computers were big and room-scale, the way they
41:36: had to be programmed was with punch cards, for example — I don't know if that was the very first method, but it's one of the
41:42: earliest — and it's very difficult, because you have to write your program and
41:47: translate it into individual numbers on the punch card. Then later on came assembly
41:54: programming languages, and those made it easier — they let
41:59: you do more in less time — but you still had to think about managing your memory, managing your
42:05: stack; you needed to decompose complex tasks into primitive
42:10: instructions, and it still took a lot of mental effort. Then later came higher-level
42:17: programming languages — I'm skipping a lot, but say C, C++, Java, C#, and
42:25: languages like Python — and they added further abstractions, where, for example, in modern
42:32: C and C++ you don't have to worry about memory management as much — at least not managing your stack — and
42:38: some of the things you had to worry about are automatically managed. You don't even
42:45: have to think about them; you can just focus on "my function accepts these values and outputs this value", and it
42:51: generates the appropriate stack management code for you.
42:57: And then came tools built with those languages, like Unity or Unreal, where you don't have to worry about
43:05: having the game engine, about being able to render things on screen —
43:10: that's already provided for you. With Resonite, the goal is to essentially move
43:17: even further along this progression: you don't have to worry about the networking
43:23: aspect, the persistence aspect, or integrations with hardware — you're just given that, and you
43:28: can focus more of your time on what you actually want to build in that environment. So that's pretty much
43:36: the big vision I have on my end for what I want Resonite to
43:42: be. EpicEon is asking: what are your thoughts on putting arrows on
43:49: generic-type wires? I'm not actually sure if I
43:55: fully understand that one. I don't know what you mean by generic-type wires —
44:01: do you mean wires that are of the type Type? I probably need
44:07: clarification for this one. Yeah, I don't know how
44:14: to interpret this particular question. Oh — he's asking about arrows on wires; some of them do have
44:23: arrows. I think the impulse ones actually have
44:28: arrows. I'm not really sure; I'd probably need to see an image or something. Next, Zit is
44:36: asking: select boxes of code that take inputs and give outputs, allowing for a coding interface with ProtoFlux without
44:41: having to build some parts of the function using the nodes? Yes, pretty much — your ProtoFlux node
44:49: becomes a function that other systems can then call without
44:54: even needing to know it's ProtoFlux. They just say, "I'm going to give you these values and I expect this value
45:01: as the output," and as long as it matches that pattern, you can use it and give it to those
45:13: does sound amazing — is that something for after Sauce? Custom ProtoFlux nodes? So,
45:19: it's not related to Sauce — it's fully on the engine side, so technically it doesn't matter whether it happens before or after
45:27: Sauce; it's not dependent on it in any way. There is a part, which is
45:34: having custom shader support — which you would want to do with ProtoFlux — that does
45:39: require the switch to Sauce, because with Unity the options for doing custom shaders are very limited and
45:48: very hacky, so that one will probably wait. But the parts I was
45:54: talking about earlier will happen regardless of when Sauce comes in —
45:59: it might happen after Sauce comes in, it might happen before; it's purely how the timing
46:06: ends up working out, and how the prioritization ends up working out. Next question — ShadowX — I'm just checking the
46:13: time. ShadowX: with nested nodes, will custom nodes be able to auto-update when the source template for a scripted node is
46:19: changed? Yes, they will. There are multiple ways to interpret this as well. If
46:24: you have a template and you have it used in lots of places, and you change the
46:29: internals of the node, every single instance is going to reflect that. So you can have it used
46:35: in lots of objects in the scene, and if you need to change
46:41: something about its internals, everything in the scene is going to reflect it. The other interpretation is:
46:47: if you make a library of nodes, and you reference that
46:53: in your world, and the author of the library publishes an updated version, is that going to also update the other worlds
47:01: which use that library? That would be handled by the
47:07: Molecule system, which is our planned system for versioning,
47:12: and we want to use it not just for Resonite itself but also for ProtoFlux,
47:17: so you can publish your library functions and so on. With that, we will let you
47:23: define rules for when to auto-update and when not. We'll probably follow something
47:29: like semantic versioning: if it's a minor update, it auto-updates — unless
47:34: you disable that as well — and if it's a major update, it's not going to auto-update unless you specifically ask it to.
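The auto-update rule outlined here follows semantic versioning. As a rough sketch — the Molecule system does not exist yet, so this function and its parameters are purely hypothetical — the decision could look like:

```python
def should_auto_update(installed: str, available: str,
                       auto_minor: bool = True) -> bool:
    """Semver-style rule: minor/patch bumps auto-update (unless the
    user opts out); major bumps always require an explicit opt-in."""
    inst = tuple(int(p) for p in installed.split("."))
    avail = tuple(int(p) for p in available.split("."))
    if avail <= inst:
        return False        # nothing newer available
    if avail[0] != inst[0]:
        return False        # major version bump: ask the user first
    return auto_minor       # minor/patch bump: follow the user's setting
```

Under this rule, a library going from 1.2.0 to 1.3.1 would update silently by default, while 1.2.0 to 2.0.0 would wait until the user explicitly asks for it.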
47:41: So that's going to be the other part of it — that one is definitely
47:48: going to give you more of a choice. Next question, TroyBorg: so
47:53: could we have something like a bubble node that has all the code for them, floating around randomly with a random lifetime on it?
48:01: I'm not fully sure what you mean by "bubble node", but pretty much: you can package all the code for
48:06: whatever you want the bubble to do into that node. Take, for
48:12: example, my bubbles — oh, that didn't
48:19: work; there we go — like this one. You see, I have this bubble, and this bubble
48:25: has code on it that handles it flying around, and right now, when I
48:33: make the bubble, it actually duplicates all the code with the bubble, which means the ProtoFlux VM needs to compile all of this
48:40: code. It's relatively simple, so it's not that much, but it still adds up, especially if you had
48:45: hundreds of these, or thousands. With nested nodes, all this
48:51: bubble will need to do is reference that template, and you only need one of it, which means it's just
48:57: going to reuse the same compiled instance of the particular
49:03: node, instead of duplicating literally the entire thing on each of the objects you make
49:09: independently. So, next question, from Tober:
49:15: for processing textures, could you do stuff like level-curve adjustments? Like in Blender's Classroom scene:
49:22: for the albedo texture, they just adjusted it with levels and grayscale and plugged it into the height map instead of a separate
49:28: texture. Yeah, you could. I can't fully understand how to
49:35: map this question to Resonite, because I'm not familiar with the Blender Classroom scene, but you could define
49:43: your own procedural texture and then use it on things. That procedural texture will end up as an actual bitmap:
49:49: it's going to run your code to generate the texture data, upload it to the GPU, and at that point
49:55: it's just a normal texture. So you would be able to do stuff
50:01: like that, or at least something similar. Next question, DustySprinkles — I'm also just checking how many there are; it's a
50:07: fair number of questions. DustySprinkles is asking: when we get custom
50:14: nodes, do you think we'll be able to create our own default node UI? I could see using audio DSP for custom
50:19: nodes to make — so, the custom UI for ProtoFlux nodes:
50:26: that's completely independent from custom nodes. It's something
50:32: we could also offer, but it's pretty much a completely separate feature, because the
50:38: generation of the node UI is technically outside of the main ProtoFlux itself — it's sort of the UI to
50:44: interface with ProtoFlux. We could add mechanisms to allow
50:49: custom node UIs. There are some parts of that I'm a little careful with, because usually
50:56: you can have hundreds or thousands of nodes, and having that customizable — having a customizable
51:02: system — can end up being a performance concern, because, depending on how
51:08: it's done, the customization can add a certain amount of overhead. But it's not something
51:14: we're completely closed to; we're just probably going to approach it more carefully. It's not going to come
51:20: as part of custom nodes, though — those are independent.
51:27: Blue — oh, you're unfortunately late, but I don't know if I can bring you on easily,
51:34: because I already have the setup without it — I'm sorry.
51:40: JWiden is asking: internally, how prepared is the engine to take full
51:45: advantage of modern .NET, past the JIT? There have been lots of things since Framework, like spans to avoid allocations, and
51:51: unsafe methods (think bit casting) that can make things way, way faster. Are there areas where we use the new features in
51:58: the headless client through preprocessor directives or something? So, we use a number of features
52:05: that are backported to older versions — like you mentioned, spans, using stack allocations, and
52:12: so on — and we expect those to get a performance uplift with the modern runtime, because they're
52:18: specifically optimized for it. So there are parts of the engine, especially anything
52:24: newer, where we've been trying to use the more modern mechanisms
52:29: where possible. There are bits where we cannot really use those mechanisms, or where, if
52:35: we did, it would actually be detrimental right now. There are certain things —
52:40: for example, the Vectors library: with modern .NET
52:47: it runs way faster, but if we used it right now, we would actually run way slower, because with
52:56: Unity's Mono it just doesn't run well. There are certain things which,
53:03: if we did them right now, would essentially end up hurting performance until we make the switch, so we tend to use a
53:08: different approach that's not as optimal for modern .NET but is more optimal now. So there might be
53:15: some places in the code where you see that, but where possible we try to use the modern mechanisms.
53:21: There are also some things we cannot use just because they don't exist in this version of .NET and there's
53:26: no way to backport them — for example, using the SIMD intrinsics to accelerate a lot of the math:
53:34: that's just not supported on older versions, and there's no way to backport it, so we cannot really
53:39: use those mechanisms. So once we do make the switch, we expect a pretty
53:47: substantial performance improvement. Part of why we want to do the switch, especially as one of the first
53:54: tasks toward performance, is because it'll let us use all these
53:59: mechanisms going forward — when we build new systems, we can
54:05: take full advantage of all the mechanisms that modern .NET offers,
54:10: and we can, over time, also upgrade the old systems to get even more performance out of them. So overall,
54:16: I expect a big performance uplift even for the parts that are not most prepared for it, just
54:21: because the code generation quality is way higher.
54:29: And then we can essentially take
54:36: time to get even more performance by switching some of the approaches to the
54:43: more performant ones. So making the switch in itself is going to
54:48: be a big performance boost, but that's not the end of it — it also opens doors for doing even more
54:54: following that. Next question, Zit: can those
55:00: splats work within the engine? They're meshes, right? Could they be rigged? So for splats, you have to implement the
55:06: support for them yourself. The output isn't a mesh — the idea of Gaussian
55:12: splatting is that you're essentially using a different type of primitive to represent your —
55:19: I don't want to say mesh, because it's not a mesh —
55:24: your model, your scene, whatever you want to show.
55:30: Instead of the vertices and triangles that we have with traditional geometry, it's completely
55:35: composed of Gaussians, which are a different type of primitive. You can actually
55:42: use meshes to render them — from what I've looked at, there are multiple ways
55:47: to do the rendering. One of the approaches
55:57: is that you essentially encode the splat data and then render it using the typical
56:03: GPU rasterization pipeline — render them as quads, and then the shader does the sampling of
56:10: the spherical harmonics, so the color can change based on the angle you look at it from, and other
56:16: things. There are other approaches that implement the actual rasterization in a compute
56:22: shader, and that can lead to more efficient ones — at that point you're not using traditional geometry. The
56:29: approaches vary; there are lots of ways to implement them. Next question, TroyBorg: so is
56:36: that something you need special software to create them, or is it just something in Resonite, so it knows how to render that new
56:41: format for 3D scans? We essentially need code to render them.
56:46: The gist of it is: you get your
56:52: data set — your Gaussian splats — and it's essentially a bunch of points with lots of extra
56:58: data. You have the size of each one, and then the color is encoded using something called spherical
57:05: harmonics, which is essentially a mathematical way to efficiently encode
57:11: information on the surface of a sphere, which means you can sample it based on a direction.
57:17: If you consider a sphere — I should have
57:22: grabbed my brush; I'm going to grab a new one because I'm too lazy to go over
57:28: there — brushes, let's see — I'm not going
57:35: to go over there, because it's just a simple doodle — say you have a sphere; in 2D
57:43: it's a circle — and if it's a unit sphere, then each point on the sphere is essentially a
57:49: direction. So if we have information encoded on the surface of the sphere, then, for each point — if I'm
57:56: the observer here, so this is my eye, and I'm looking at this — the direction is
58:03: essentially the vector from the center of the sphere towards the observer. I use this direction to sample the function,
58:10: and I get a unique color for this particular direction. If I look at it from another direction,
58:18: then I get this direction, and I sample the color here — and this color can be different from that color. This is
58:25: why Gaussian splats are really good at encoding things like reflections, because with a
58:31: reflection, the color at a point literally changes based on the angle you look at
58:38: it from. And it's — sorry —
58:46: it's the spherical harmonics that actually take up the bulk of the data for Gaussian splats, because from what I've
58:53: seen, they use third-order spherical harmonics, which means for each point you actually have 16 colors, which is quite a lot.
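The "16 colors" figure follows from how spherical harmonics scale: a basis up to degree n has (n + 1)² functions. A quick back-of-the-envelope in Python — the raw float32 size is my own illustrative estimate, not a figure from the stream — shows why compression matters:

```python
def sh_coefficient_count(degree: int) -> int:
    """Real spherical harmonics up to a degree give (degree + 1)**2 basis functions."""
    return (degree + 1) ** 2

coeffs = sh_coefficient_count(3)    # third-order SH -> 16 per color channel
raw_color_bytes = coeffs * 3 * 4    # 16 RGB triples stored as float32
print(coeffs, raw_color_bytes)      # 16 coefficients, 192 bytes per splat
```

At 192 raw bytes of color per splat, a scan with a few million splats would need hundreds of megabytes for color data alone — hence the emphasis below on compression the GPU can decode on the fly.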
59:00: A lot of the work is in how you compress that in a way that
59:06: the GPU can decode very fast, on the fly, so it doesn't eat all your
59:12: VRAM. But yeah — to answer the question more directly — you essentially write your
59:17: code to encode it properly and then render it as efficiently as you can, and you can utilize
59:25: some of the existing rasterization pipeline as well to save yourself some time. Zit is asking: I don't
59:32: have a good understanding of splats, but aren't they essentially particles? So — I've mentioned
59:37: this a few times — they're not particles in the sense of a particle system. There's some overlap, because each splat is a point, but it
59:46: has a lot of additional data to it, and it's also not a tiny small particle — it can be a variously
59:51: sized, colored blob. Next question, DustySprinkles: so
59:58: can you make splats with any 3D scans? I don't really get them, but they're neat. So, the data set I use for mine —
1:00:05: it's essentially just photos. It's the same approach I use for traditional —
1:00:12: let me
1:00:19: think — there are different ways to make them, but the
1:00:25: most common one I've seen is: you just take lots of photos, you use the traditional approach where the
1:00:30: photos get aligned in space, and then you
1:00:37: estimate the depth — like with traditional 3D reconstruction — except for splats it doesn't estimate
1:00:43: the depth that way. The way I've seen it done in the software I use is that it starts with a sparse point cloud that's made
1:00:50: from the tie points from the photos — essentially points in space that are shared between the photos — and
1:00:57: it generates splats from those. And the way it does it is,
1:01:02: I believe, gradient descent, which is a form of machine learning, where each of the splats is
1:01:09: actually taught how it should look, so that it matches your input images.
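The "teaching" step here is gradient descent: nudge each parameter downhill on an error measure until the result matches the reference. As a toy, one-dimensional illustration of that idea — this is not the actual 3D Gaussian splatting optimizer, which fits positions, shapes, opacities, and harmonics against rendered images:

```python
def fit(targets, steps=200, lr=0.1):
    """Gradient descent on mean squared error for a single parameter."""
    value = 0.0  # rough initial guess, like the sparse point cloud seed
    for _ in range(steps):
        # derivative of mean((value - t)^2) with respect to value
        grad = sum(2 * (value - t) for t in targets) / len(targets)
        value -= lr * grad  # step downhill
    return value

# Settles on the value that best explains all targets at once,
# the way each splat settles on the look that best matches the photos.
print(fit([0.2, 0.4, 0.6]))  # converges to the mean, ≈ 0.4
```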
1:01:15: That's usually the longest part of the reconstruction process, because it has to go through a lot of training. The software I
1:01:22: use is called Jawset Postshot, and it
1:01:29: essentially runs, usually, several tens of
1:01:36: thousands of training steps. At the beginning you can see the splats are very fuzzy and
1:01:42: everything is kind of moving around, and they sort of settle into place and gain more detail. It also
1:01:48: adds more splats in between them where it needs more detail. So there's a whole
1:01:55: training process. I do actually have a video I can show you, because there's also a relevant question I can see.
1:02:03: So I'm going to — because ShadowX is asking: does
1:02:10: all common splatting software encode spherical harmonics? I never noticed color changes in my scans in
1:02:16: Scaniverse and Postshot. So, I know for sure Postshot does it. I don't know about Scaniverse, because I don't use
1:02:21: that one. It's possible they simplify it, because I've seen some implementations of Gaussian splats that just
1:02:27: throw away the spherical harmonics and keep a single color, which saves a ton of space, but you also lose one of the
1:02:33: big benefits of them. But I can tell you Postshot
1:02:40: definitely does it, and I have a video that showcases that pretty well. So
1:02:47: this is a scan I did a while back, and I reprocessed it
1:02:53: with Gaussian splatting — watch the reflections on the surface of the statue:
1:03:00: you can see how they change based on where I'm looking from. It's kind of subtle. If I look at the top, there's
1:03:08: actually another interesting thing — and I have another video for this — but Gaussian splatting works
1:03:14: best if you have good coverage from opposite angles, because of the way the training process works:
1:03:21: like I mentioned earlier, the splats are trained to reproduce your input images as
1:03:26: closely as possible, which means all the areas where you have photo coverage look
1:03:33: generally great, but if you move too far away from them — in this case, for example, I was not able to
1:03:40: take any pictures from the top of it — they start going a bit funky. Do you
1:03:47: see how it's all kind of fuzzy and the color is blotchy?
1:03:55: For one, that shows that it
1:04:02: does encode the color based on the direction, but it also shows one of the downsides. I
1:04:08: have another video here. So this is a scan of — I don't even
1:04:16: know how long ago this was like over six years ago um uh when um uh like my family from
1:04:26: Ukraine they were like visiting over because my grandma she was um Ukrainian uh and they made
1:04:32: borch uh which is like know kind of tradition like foods and I I wanted to scan it but I Lally didn't have time
1:04:39: because they put on desk I was like I'm going to scan it and I was only able to take three photos before they started
1:04:44: moving things around uh but it actually made for an interesting scan because I was like how much can I get out of three
1:04:51: photos and in the first part this is traditional scan with a mesh surface
1:04:57: that's done with you know with ag of meta shape oh I switch to the other one so you see all the reflections
1:05:03: they're kind of like you know baked in it doesn't actually look like metal anymore uh there's you know lots of
1:05:09: parts missing because literally they were not scanned but the software was
1:05:15: able to estimate the surface it knows this is a straight surface if I look at it from the angle apart from the missing
1:05:22: parts it's still coherent it still holds shape with Gaussian splatting it doesn't
1:05:29: necessarily reconstruct the actual shape it's just trying to look correct from certain angles and you'll be able to see that in
1:05:35: a moment uh so this is the Gaussian splat and you see it looks correct and the moment I move it it just
1:05:42: disintegrates like you see it's just a jumble of you know colorful points and it's because all the views that I had
1:05:49: they're like relatively close to each other and for those views it looks correct because it was trained to look
1:05:55: correct but because there's no cameras you know from the other angles the Gaussians are free to do you know just whatever
1:06:02: like they they don't have anything to constrain them you know to look a particular way so it just ends up a
1:06:07: jumble and that's a very kind of to me that's very kind of interesting way to
1:06:13: visualize the differences between the scanning techniques but yeah just kind of along
with it like the answer is yes they do encode the spherical harmonics and you can make it like you
1:06:24: know pretty much with any scan but the quality of the scan is going to depend
1:06:29: you know on your data set and I've been kind of throwing because I have like terabytes of like you know 3D scans that
1:06:35: I've just been throwing everything you know at the software and seeing what it ends up producing I know there's also other ways
1:06:42: there's like you know some software and also just double-checking time uh there's also you know some software um that um
1:06:51: just generates it like you know with AI and stuff but like I don't know super much about that so there's like other
1:06:57: ways to do them um but I'm mostly familiar with the one you know with um
1:07:02: I'm mostly familiar with um you know like using photos as the data
set uh next question uh J ven 4 asks ProtoFlux is a VM? it compiles things? um so
1:07:15: technically VM and compiling those are like two separate things also Epicon is asking what is the ProtoFlux VM so I'm going
1:07:21: to just combine those questions into one um
1:07:27: so yes um it it is a VM which essentially means it's sort of like you know it has a defined sort of like run
1:07:33: time like it's it's a technically stack based VM um it's
a um how do I put it it's essentially sort of an environment where the code of the nodes you know it knows
1:07:46: how to work with that particular environment um that sort of isolates it
1:07:51: you know from everything else it's sort of like you know like a runtime that's able to run the code it sort of compiles
1:07:57: things it's sort of like a halfway step it doesn't it doesn't directly produce
1:08:03: machine code from the actual code the actual code you know of the individual nodes that ends up being
1:08:09: machine code for the node itself but the way it operates is within the VM um what
1:08:15: ProtoFlux does is it builds uh something called execution lists and evaluation lists so if you have like you know a
1:08:21: sequence of nodes or a sequence of impulses it's going to look at it and be like okay this executes then this
1:08:26: executes then this executes and builds a list and then during execution it already has the pre-built list as well
1:08:33: as like it resolves things like you know stack allocation it's like okay this node needs to use this variable and this
1:08:38: node needs to use this variable I'm going to allocate this space on the stack and you know and I'm going to give
1:08:43: these nodes the uh the corresponding offsets so they can you know read and write the proper values to and from the stack um
1:08:52: so it does a lot of the building process up front it doesn't end up as full machine code so like I would say it's sort of like a
1:08:58: halfway step towards compilation um eventually we might consider like you know doing sort of a
1:09:04: JIT compilation where it actually makes you know full machine code for the whole
thing which could help like improve the performance of it as well um but right now it is a
VM that um does a sort of halfway compilation step to kind of speed things up
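The pre-built execution list and stack-offset resolution described above can be sketched roughly like this (toy Python; all names are hypothetical, not the actual ProtoFlux implementation):

```python
# Toy sketch of a ProtoFlux-style "halfway compiled" VM (hypothetical names).
# Instead of re-traversing a node graph every execution, the graph is resolved
# once into a flat execution list, and each node's inputs and output get fixed
# offsets into a shared stack.

class Node:
    def __init__(self, fn, input_offsets, output_offset):
        self.fn = fn                        # the node's actual compiled code
        self.input_offsets = input_offsets  # stack slots to read arguments from
        self.output_offset = output_offset  # stack slot to write the result to

    def execute(self, stack):
        args = [stack[i] for i in self.input_offsets]
        stack[self.output_offset] = self.fn(*args)

def build_program(nodes, stack_size):
    """The 'compile' step: list order and stack offsets are fixed up front."""
    stack = [None] * stack_size
    def run():
        for node in nodes:  # pre-built execution list: no graph traversal
            node.execute(stack)
        return stack
    return run

# (2 + 3) * 4 expressed as four nodes with pre-assigned stack slots 0..3
program = build_program([
    Node(lambda: 2, [], 0),
    Node(lambda: 3, [], 1),
    Node(lambda a, b: a + b, [0, 1], 2),
    Node(lambda a: a * 4, [2], 3),
], stack_size=4)

print(program()[3])  # -> 20
```

The point of the sketch is that all the "figuring out" (ordering, stack layout) happens once, so per-execution cost is just walking the list.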
1:09:21: uh it also does this to like you know validate certain things like for example if you have like you know infinite
1:09:27: kind of continuation loops like certain things are essentially illegal like
1:09:33: you cannot have those be a valid program uh which kind of helps avoid you know
1:09:39: some issues where we'd have to figure out certain problems at run
1:09:46: time but in short like the ProtoFlux VM it's like a way for you know ProtoFlux to
1:09:52: essentially do its job it's like an environment execution environment um you know that defines how
1:09:59: it look kind of works and then all the nodes can operate within that environment uh next question Nitra is
1:10:06: asking is the current plan to move the graphical client to .NET 9 via the multiprocess architecture before
1:10:12: Sauce yes um so we are currently um I'm actually going to I've done it on
1:10:18: the first one but since I have a little bit better setup I might do it again just to get like you know better view um
1:10:24: let me actually get up for this one uh I'm going to move over here there we go so I'm going to move
1:10:32: over here I already have my brush that I forgot earlier I'm going to clean up um all this
1:10:39: stuff um let's move this to give you like a gist of like the performance
1:10:46: update there we go clean all this up um make sure I'm not going to hit a
1:10:52: wall grab my brush there we go so right now a little bit
1:10:59: more right now um so the way like the situation is
1:11:07: right now oops so imagine this is Unity I'm
1:11:16: actually going to write it here unity and this is uh within the unity we
have FrooxEngine so this is Froox
1:11:31: Engine I'm just going to abbreviate it
1:11:37: and um so Unity you know it has its own stuff just you know like whatever Unity
1:11:43: has and then with FrooxEngine most things in FrooxEngine they're actually
1:11:49: fully contained you know within FrooxEngine so there's like lots of systems I'm just going to draw them as little boxes and
1:11:55: they kind of like fully contained Unity has no idea they even exist uh but then there's right now
1:12:03: there's two systems uh which are sort of shared between the two um there's a
1:12:10: particle system and there's the audio
1:12:22: system so those two systems uh they're essentially a hybrid where Froox
1:12:28: Engine does some of the work and Unity does some of the work and they're very kind of intertwined
1:12:34: there's another part where FrooxEngine communicates with Unity there's you know other bits
1:12:41: there's also like lots of like you know little kind of connections between things that kind of you know tie the two
together um and the problem is Unity uses something called Mono
1:12:55: which is a runtime it's also actually like a VM you know like the ProtoFlux VM but different but essentially it's
1:13:02: responsible for taking our code and running it you know translating it into instructions for your CPU providing you
1:13:09: know also um like kind of base library like you know um implementations and so
1:13:15: on and the problem is the version that we need to use it's very old and
1:13:20: it's very slow and because like all of FrooxEngine is kind of running inside of it um that makes it uh you know that
1:13:30: makes all of this like slow as well so what a plan is in order you know
1:13:36: to get a big big performance update is uh first we need to like simplify you
know we need to disentangle the few bits of FrooxEngine from Unity as much as
1:13:48: possible uh the part I've been working on you know um is the particle system uh
1:13:53: that one's very close I think we'll probably start testing this thing next week uh it's called
1:14:00: PhotonDust that's our new in-house particle system and the reason we're
1:14:05: doing it is so we can actually you know take this whole bit oh oh
1:14:11: no I might I might just redraw it I wanted to make a nice visual part but uh
1:14:17: it's not cooperating just going to do
1:14:22: this and then I'll do this and just you know particle system audio system so
1:14:27: what we do we essentially replace this with this we make it fully contained
1:14:33: inside of FR engine uh once that is done we're going to do the same for audio engine so it's going to be also fully
1:14:39: contained here which means we don't have you know ties here and then this part
1:14:45: instead of like you know lots of little kind of like wires we're going to rework this so all the
1:14:52: communication uh with unity happens via like a very nicely defined sort of
1:14:58: package where it like you know sends the data and like then the system you know
1:15:03: it'll do like you know whatever here but the tie to Unity is now you know greatly
1:15:09: simplified it essentially sends all the stuff you know that needs to be rendered you know and some stuff that kind needs
1:15:15: to come back is sent over a very well defined interface that can be um
1:15:20: communicated over you know some kind of like interprocess communication mechanism probably combination of like
1:15:27: um uh a shared memory and some like you know pipe
mechanism once this is done what we will be able to do is we could actually take Froox
1:15:37: Engine and take this whole thing out if I kind of grab the whole thing it's being unwieldy uh just just pretend this
1:15:45: is smoother than it is they'll take it out into its own process and because we
now control that process instead of you know being stuck with Unity we can use .NET
1:16:02: 9 and this part like this is the majority of like you know where the time is spent
running except you know when it comes to rendering which is the Unity part which means because we'll be able to run with
1:16:15: net 9 um we'll get a huge performance boost
1:16:21: and the way we know we're going to get like you know significant performance boost is because we've already done this
for the headless client that was the first part you know of this performance work is moving the headless client to use .NET 8
1:16:34: which is now .NET 9 because they released a new version um the reason we wanted to do
1:16:40: headless first is because headless already exists outside of unity it's not tied to it so it was much easier to do
1:16:47: this for headless you know than doing this for the graphical client and headless it pretty much shares most of
1:16:53: this most of the code that's like you know doing the heavy processing is in the Headless same as you know on the
1:16:59: graphical client uh when we made the switch and we had Community start
1:17:04: hosting events with the .NET 8 headless we noticed a huge performance boost
1:17:09: there's been like sessions like for example the uh Grand Oasis karaoke um I remember like they they used
1:17:17: to their headless used to struggle when it was getting around you know 25 people the FPS of the headless would be
1:17:24: dropping you know the session would be degrading with .NET 8 they've been
1:17:29: able to host a session which had I think at a peak like 44 users and all the users all the IK you
1:17:36: know things like all the dynamic bones all the you know ProtoFlux everything you know that's being
1:17:41: computed on graphical client it was being computed on headless minus obviously you know rendering stuff and
1:17:49: the Headless was able to maintain 60 frames per second with 44 users
which um you know that's at least like an order of magnitude kind of
improvement over you know running with Mono um so doing it for headless first
1:18:08: that sort of let us you know gauge how much of a performance Improvement will
1:18:13: the switch make and whether it's worth it you know do the separation as early as possible and based on the data like
it's pretty much like you know I feel like it's very very worth it and this is why we've been kind of you know focusing
1:18:29: on making this happen making you know this kind of thing where you can pull FrooxEngine out of Unity
1:18:35: and run it with .NET 9 and then the communication you know instead of like you know the
1:18:40: communication happening within the process it's going to pretty much happen the same way except across you know
1:18:47: process boundary the other benefit of this uh is
1:18:55: you know how do we align this you know because we still even when we do this once we reach this point we'll still
1:19:00: want to get rid of Unity for a number of reasons um one of those is you know like custom shaders those are
1:19:08: really really difficult to do with unity at least you know making them like real time and making them you know support
1:19:14: like backwards compatibility making sure the content doesn't break stuff like that um being able to use more efficient
1:19:20: rendering methods like instead of you know having to rely on deferred um we'll be able to like you know use
1:19:27: like cluster forward which can handle you know lots of different shaders with lots of
1:19:33: lights so we'll want to get rid of unity as well and this whole thing where the
communication between FrooxEngine which does you know all the kind of computations and then sends stuff like
1:19:45: please render this stuff for me um because this process makes this a lot more defined we can essentially take the
1:19:53: whole Unity I'm just going to yeet it
1:19:59: away and then we'll plug in
1:20:05: Sauce instead so Sauce is going to have like you know its own things and inside like
1:20:11: Sauce is actually going to be like you know right now it's being built on the Bevy rendering
1:20:16: engine so I'm just going to put it there and the communication is going to happen pretty much the same way you know and
this is going to do whatever so we can we can you know snip Unity out and replace it with Sauce there's
probably going to be some minor modifications to this how it kind of communicates so we can kind of build around the new features of Sauce and so
on but the principle of it by moving FrooxEngine out by making everything neatly
1:20:43: contained making a neat you know communication method it makes the switch to Sauce much easier as well as the
1:20:49: next step um actually the latest thing from the development of Sauce there
1:20:55: was actually a decision made that uh Sauce is probably not going to have any C# parts at all uh it's going to be
1:21:01: purely Rust based which means like it doesn't even need to um it doesn't need to like you know worry about .NET 9 or
1:21:09: like you know C# interop because uh its responsibility is going to be you know rendering whatever FrooxEngine sends it
1:21:15: and then maybe you know sending some like calls kind of back where there needs to be bidirectional
1:21:21: communication to like you know sync stuff up um but like you know the actual world
1:21:26: like the world model you know all the kind of like all the like you know interaction that's going to be fully
1:21:31: contained in FrooxEngine external to Sauce um that on itself is going to be a
big upgrade uh because it's going to be a much more modern rendering engine it will be able to do you know things like the custom
1:21:43: shaders like I was mentioning there's some potential benefits to this as well because um
1:21:48: the multiprocess architecture uh it's inspired by you know Chrome and Firefox
1:21:53: which do the same thing uh where your web browser is actually running you know multiple
1:22:00: processes um one of the benefits that adds is you know sandboxing because um
1:22:06: once this is kind of done we'll probably do the big move like this and at some point later in the future will split
1:22:12: this even into more processes so each of the worlds you host you know can be its own
1:22:17: process also .NET 9 you know or whatever the .NET version is so this is going to be like you know one world this is going to be
1:22:24: another world and these will you know communicate with this and this will communicate with this and the benefit is
1:22:31: like you know if a world crashes it's not going to bring the entire thing down it's the same thing you know
1:22:37: in a web browser if you ever if you ever had your browser tab crash this is kind
1:22:42: of similar principle it crashes just the tab instead of crashing the whole thing similar thing we might be able to I'm
1:22:49: not promising this right now but we might be able to design this in a way where if the renderer crashes
1:22:55: we'll just relaunch it you'll still stay in the world that you're in your visuals are just going to you know go away for a bit
1:23:00: and then going to come back so we can like you know reboot this whole part without bringing this whole thing down
1:23:05: and of course if this part comes down you know then it's over but then you have to restart uh but by splitting into
1:23:13: more modules you kind of you know you essentially eliminate the possibility of
crashing because this part will eventually be doing relatively little it's just going to be you know coordinating the
1:23:24: different processes but for the first part we're just going to move you know FrooxEngine into a separate process out of
1:23:32: Unity that's going to give us a big benefit thanks to .NET 9 um there's other benefits because for example
1:23:39: Unity's garbage collector is very slow uh and very CPU heavy and .NET 9 has a way
1:23:45: more performant one as well we'll be able to utilize you know new performance benefits of like .NET 9 like in the code
1:23:52: itself because we'll be able to start you know using new functions in FrooxEngine because now like you know we
1:23:57: now we don't have to worry about what Unity supports um following that uh the next
big step is probably going to be the switch to Sauce so we're going to you know replace Unity with Sauce and at some
1:24:10: point in the future we'll do like more splitting um you know of FrooxEngine into more separate processes to improve
1:24:16: stability and also add sandboxing because once you kind of do this you can sandbox
1:24:21: this whole process using you know the operating system's sandboxing primitives which will
improve security um so that's kind of like you know the general overall plan of what
1:24:33: you want to do you know with the architecture of the whole system and it's been like heavily like I've been
1:24:38: like reading a lot you know how Chrome and Firefox did it and Firefox actually did a similar thing where they used to
be like a monolithic process and then they started like you know doing work to break it down into separate processes and
1:24:50: eventually did you know just two processes and then kind of broke it down into even more
1:24:55: and we're essentially going to be doing a similar thing there so I hope this um
1:25:00: this kind of answers it gives you a better idea of what we want to do with
performance uh you know for Resonite and what you know what what are the major
1:25:11: steps uh that we need to take and also explains why we are actually reworking
1:25:16: the particle system and audio system because on the surface it might seem you
1:25:22: know like why are we reworking the particle and audio system when we want you know more
1:25:28: performance um and the reason is you know just so we can pull them fully into FrooxEngine make them kind of
1:25:34: mostly independent like you know of Unity and then we can pull FrooxEngine out and that's the major reason
1:25:41: we're doing it the other part is you know so we have our own system that we kind of control because once we also
switch Unity for Sauce if the particle system was still in Unity Sauce would have to reimplement it and it would also
1:25:53: complicate this whole part because like now we have to like synchronize this particle system with
1:26:00: all the details of the particle system on this end um so that's that's another benefit uh
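The shared-memory-plus-pipe communication between FrooxEngine and the renderer described above might look roughly like this (hypothetical message format; Python threads stand in for the two processes to keep the sketch runnable):

```python
# Sketch of the planned engine <-> renderer split (hypothetical message names).
# Bulk data goes into shared memory; a small control message over a "pipe"
# tells the other side where to look. Threads stand in for the two processes.

import queue
import threading
from multiprocessing import shared_memory

def engine(pipe, shm_name):
    shm = shared_memory.SharedMemory(name=shm_name)
    shm.buf[:5] = b"hello"  # bulk payload written into shared memory
    # small control message sent over the pipe, referencing the shared region
    pipe.put({"cmd": "render", "offset": 0, "length": 5})
    shm.close()

def renderer(pipe, shm_name, results):
    msg = pipe.get()  # block until a control message arrives
    shm = shared_memory.SharedMemory(name=shm_name)
    payload = bytes(shm.buf[msg["offset"]:msg["offset"] + msg["length"]])
    results.append((msg["cmd"], payload))
    shm.close()

shm = shared_memory.SharedMemory(create=True, size=64)
pipe, results = queue.Queue(), []
t = threading.Thread(target=renderer, args=(pipe, shm.name, results))
t.start()
engine(pipe, shm.name)
t.join()
print(results)  # [('render', b'hello')]
shm.close()
shm.unlink()
```

The design point is that only a tiny, well-defined message crosses the boundary; the heavy data never gets copied through the pipe itself.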
1:26:08: but there's also some actual performance benefit even just from the new particle
system uh because uh the new particle system is designed to be asynchronous
1:26:19: which means if you do something really heavy you're only going to see the particle system lag and you will not
1:26:24: lag as much because um the particle system if it's not if if it doesn't finish its computations within you know a
1:26:31: specific time it's just going to skip and you know render the previous state and the particle system itself will
1:26:39: lag but you won't lag as much so that should uh help improve your overall frame rate as well so that's pretty much
1:26:46: the gist of it um the particle system is almost done um we'll probably you know
1:26:52: start testing like this upcoming week uh the audio system that's going to be the next thing after that it's going to
1:26:57: be you know interfaced with Unity once that is done then the pull happens into the separate process which is going to be a
1:27:04: relatively simple process at that point because everything everything will be in place you know for the pull out
1:27:11: from Unity to happen so hopefully this kind of you know uh gives you a much
better idea and if you have like any questions about it you know feel free to ask we're always happy to kind of clarify how this is
1:27:22: going to work um I'm going to go big boink there we go uh I'm going to
down there we go so that was another of those kind of real hard questions I kind of did this explanation
1:27:41: on my first episode but uh I kind of wanted to do it again because I have like a little bit better setup you know with
1:27:47: the right things so we can make also like a clip out of it so people so we have something to refer people to but
1:27:54: yeah uh that's that's the answer to n's question I'm also checking time I have like about 30 minutes left um so let's
1:28:02: see do we have there's a few questions but I think I can get through them uh this actually kind of working
1:28:07: out I've been like worried a little bit because I'm taking a while to answer some of the questions and going on tangents but uh um uh seems to kind of
work out with the questions we have so next uh Shadow X does all common
1:28:21: splatting software and oh sorry I already answered that one um J ven 4
uh so the VM is kind of like an optimization layer rather than something akin to the CLR or Chrome's V8 so it has the fundamentals of
1:28:34: a VM but the goal is just to you know know ahead of time what needs to be run and pipeline it quickly it's I mean it's it's
1:28:40: the same general principle as like you know as the as the CLR or Chrome's V8 the VM
1:28:46: is essentially it's just an environment in which the code can exist and which in which the code operates and the way the
1:28:53: VM you know runs that code can differ some VMS you know they can be purely
interpreted like literally you know maybe you just have a switch statement that is just switching based on the instruction
1:29:03: and doing things maybe it does you know some kind of compilation into some sort of AST and then you know
1:29:10: evaluates that or maybe it takes it and actually emits you know machine code for whatever architecture you're running on
there's lots of different ways for a VM to execute your code um so the way ProtoFlux
1:29:22: executes code and the way the CLR or V8 execute code is different I think actually V8
1:29:29: like I think it does like a hybrid where it like it kind of converts some parts into
1:29:34: machine code and like some it kind of interprets but it doesn't interpret the original like you know typed code it
1:29:40: interprets like some of the you know uh abstract syntax tree um I don't fully
1:29:46: remember like the details but I think like V8 does a hybrid where it can actually have kind of both C# that
1:29:55: yeah C# I think always translates it to machine code but one thing they did introduce uh with the latest like
versions is they have multi-tiered JIT compilation so one of the things they do
1:30:08: is like when your code runs they will JIT compile it into machine code which is actually you know native code for your
1:30:15: CPU and um they they just run it like they compile it like you know
1:30:21: fast because you want you don't want to be waiting you know for the application to actually run but that
1:30:27: means they cannot do as many optimizations what they do though is like when the JIT compiler
1:30:33: makes that code that's like you know done in a very quick way so it's not as optimal it has like a counter each time
1:30:39: like you know method is called and if it crosses a certain threshold you know say like the method gets called more than 30
1:30:45: times it's going to trigger the jit compiler to compile much more optimized
1:30:51: version but like it goes a really heavy you know on the optimizations to make much more
1:30:57: faster code which is going to take it some time but in the meanwhile while it's doing that it can still keep
1:31:03: running you know the slow code that was quickly JIT compiled once the JIT compiler is ready it just swaps it out for the
1:31:09: more optimized version um and uh you
know and and at that point your code actually speeds up so if you have code that's like you know being called very
often you know like the main game loop for example it ends up compiling it in a very optimal way if you have
1:31:26: something that runs just once like for example some initialization method you know like when you start up the engine
there's some initialization that only runs once it doesn't need to do heavy optimizations on it because that would just be a
1:31:37: waste of time it speeds up the startup time and it kind of optimizes for both and I think I think the latest
1:31:44: version um it actually added um I forget the term for it they used but it's a
1:31:51: it's essentially like you know multi-stage compilation where they look at what are the common you know
1:31:57: arguments for a particular method and then assume those arguments are constants and
1:32:03: it will compile a special version of the method you know with those arguments as constants which lets you optimize even
more because now you don't have to worry can this argument you know be different values and have to do all the math it can precompute all the math that is
1:32:16: dependent on that argument ahead of time so it actually runs much faster and if you have a method that is oftentimes
1:32:23: called with um very specific arguments it now runs much faster
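The counter-triggered tier-up just described can be sketched like this (hypothetical threshold and names; a toy illustration, not the actual .NET implementation):

```python
# Toy sketch of counter-triggered tiered JIT compilation (hypothetical
# threshold). A function starts with a cheap-to-produce "tier 0" version;
# once its call count crosses a threshold, the runtime swaps in a heavily
# "optimized" tier 1 version, and callers never notice the swap.

THRESHOLD = 3

class TieredFunction:
    def __init__(self, quick, optimized):
        self.quick = quick          # fast to produce, slower to run
        self.optimized = optimized  # slow to produce, faster to run
        self.calls = 0
        self.tier = 0

    def __call__(self, x):
        self.calls += 1
        if self.tier == 0 and self.calls > THRESHOLD:
            self.tier = 1           # hot method: swap in the optimized code
        impl = self.optimized if self.tier == 1 else self.quick
        return impl(x)

square = TieredFunction(quick=lambda x: x * x, optimized=lambda x: x ** 2)
tiers = []
for _ in range(5):
    assert square(3) == 9           # same result at both tiers
    tiers.append(square.tier)
print(tiers)  # -> [0, 0, 0, 1, 1]
```

Both tiers compute the same answer; only the "how" changes once the method is observed to be hot.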
1:32:29: there's actually another JIT another VM that did this called LuaJIT which is um like
1:32:35: like a runtime for the Lua language and what was really cool about that one
1:32:42: is like um even though Lua you know is just considered this kind of scripting language LuaJIT was able to
1:32:49: outperform languages like C and C++ in some benchmarks because
1:32:54: with C and C++ all of the code is compiled ahead of time so like you you don't know
1:33:00: actually you know what kind of arguments you're getting what LuaJIT was able to do is like be like okay this value is always
1:33:07: an integer or maybe this value is always an integer that's you know the number
1:33:12: 42 um so like I'm just going to compile a method that assumes this is
1:33:18: 42 and it makes like a super optimal version of the you know method and that one runs
1:33:25: like you know even more optimized you know than the C and C++ because C and C++ cannot make those
1:33:31: assumptions there's like some like you know I know there exist like you know profiling compilers where they
1:33:37: actually run your code and you know they will try to also figure it out and then you compile your code um you know with
1:33:43: those kind of profiling optimizations and they can do some of that too but um it
1:33:48: kind of shows you know like it like there's some benefits you know to the compilers where they can be kind of
1:33:55: more adaptive and they kind of they can kind of you know do it for free because you don't you don't have to
1:34:07: like um you don't have to like you know do it as part of your development process once you kind of upgrade your
1:34:13: system you just kind of get all these kind of benefits and it's and it's able to run
1:34:13: your code that's the same exact code as you had before it's able to run it much faster because it's a lot smarter how it
1:34:20: converts it into machine code um so next question is Nitra is asking uh
how about video players are they already fully inside FrooxEngine as well no the video players they actually exist more
1:34:34: on the unity side and they are pretty much going to remain majorly on the
1:34:39: unity side um the reason for that is because the video player it has a very
1:34:44: tight integration with the unity engine because it needs to update you know the GPU textures with the decoded video data
1:34:52: um it has a bit of like you know a mechanism because we will need to send the audio data back for FrooxEngine to like
1:34:57: process and send back um so that's going to be um like like
1:35:05: video players are essentially going to be considered like you know as an asset because even like you know stuff like textures when you load a texture you
know it needs to be uploaded to the GPU memory it needs to be sent to the renderer and the way I plan to approach that one
1:35:18: is you know through a mechanism called shared memory where the texture data itself FrooxEngine will allocate in
1:35:24: shared memory and then it will pass you know over the pipe it will pass it will essentially tell the renderer here's
1:35:31: shared memory here's the texture like you know information like the size format and so on please upload it to the
1:35:37: memory under you know this handle for example and it assigns it you know some kind of like number to identify the
texture and essentially sends that over to the you know Unity engine the Unity engine is going to like you know read the texture data and upload it to the GPU and
1:35:52: it's going to send a message back to FrooxEngine like you know texture number you know 420 has been you know uploaded
1:35:59: to the GPU and FrooxEngine knows okay this one's now loaded and then when it sends it please render these things it's going
1:36:04: to be like okay render this thing you know with texture number 420 and it's going to you know send it
1:36:10: as part of its package to like Unity and Unity will know okay I I have this texture under this number you know and
like it's going to sort of like prepare things it's going to be a similar thing with video players where the playback and the
1:36:23: decoding happens on the Unity side um and FrooxEngine just sends you know some like basic information to kind of update it
1:36:30: you know be like you know the position should be this and this you know like you should do this and this
1:36:36: with the playback and it's going to be sending back you know some audio data uh but yeah those parts those parts
1:36:42: are going to remain like within within like Unity uh so quite a bit like we have
1:36:49: like 25 minutes um and there's only like one question right now
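The texture-upload handshake just described, with numeric handles acknowledged by the renderer, could be sketched like this (hypothetical message names, not the actual protocol):

```python
# Sketch of the texture-upload handshake (hypothetical message names).
# The engine assigns each texture a numeric handle, asks the renderer to
# upload it, and afterwards only ever refers to the texture by that handle.

class Renderer:
    def __init__(self):
        self.gpu_textures = {}  # handle -> metadata for "uploaded" textures

    def handle_message(self, msg):
        if msg["cmd"] == "upload_texture":
            # pretend this reads pixel data out of shared memory and uploads
            # it to the GPU under the requested handle
            self.gpu_textures[msg["handle"]] = msg["size"]
            return {"cmd": "texture_uploaded", "handle": msg["handle"]}
        if msg["cmd"] == "render":
            # the engine references textures by handle only
            return {"cmd": "frame_done",
                    "used": [h for h in msg["textures"]
                             if h in self.gpu_textures]}

renderer = Renderer()
ack = renderer.handle_message(
    {"cmd": "upload_texture", "handle": 420, "size": (512, 512)})
frame = renderer.handle_message({"cmd": "render", "textures": [420]})
print(ack["cmd"], frame["used"])  # texture_uploaded [420]
```

The handle indirection is what lets the heavy pixel data stay on the renderer's side of the process boundary after the one-time upload.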
um uh J ven asks I guess something I don't really understand about Resonite is where
1:37:00: simulation actually happens does the server simulate everything and the clients just poll or do the clients do some work and send
1:37:06: it to the server is it a mix for example players' IK local ProtoFlux on server or do all servers and clients simulate everything
1:37:12: locally as defined ProtoFlux can get pretty confusing pretty fast um so usually it's
a mix but like the way uh the way like Resonite works so like FrooxEngine
1:37:24: works is by default everything's built around your data model and by default the data model
1:37:31: is implicit and synchronized which means if you change something in the data model FRS engine will replicate it you
1:37:38: know to everyone else um and the way you know most things like the components and stuff works is
1:37:44: the DAT data model itself um it's sort of like like an AO
1:37:50: things you know it's like the data model says you know this this is how things should be and any state you know any any
1:37:57: state that's uh ends up you know representing something like it represents you know something out of
1:38:03: visual some state of some system that's fully contained within the data model and that's the really important part the
1:38:10: only thing that can be like local to the components by default is you know any caching data or any data you know that
1:38:18: can be deterministically computed from the data model uh if the data model changes it
1:38:24: doesn't matter what internal data it has the data model says you know things should be this way um and then like the whole
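The idea that writes to the data model are implicitly replicated, while components keep only deterministically derived local state, could be sketched like this. This is a toy Python model; `SyncField`, `World`, and `ScaledValue` are invented names, not FrooxEngine API.

```python
class SyncField:
    """A data-model field: every write is implicitly queued for replication."""
    def __init__(self, world, name, value):
        self.world, self.name, self._value = world, name, value

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        # Implicit replication: every change is queued for all other users.
        self.world.outgoing.append((self.name, new))

class World:
    def __init__(self):
        self.outgoing = []   # changes waiting to be broadcast to other users

class ScaledValue:
    """Component whose only local state is a cache derived from the data model."""
    def __init__(self, world):
        self.source = SyncField(world, "source", 0.0)
        self._cache = None        # local, never synchronized

    def value(self):
        # Deterministic function of the data model: safe to keep local,
        # because anyone can recompute it from the synchronized source.
        if self._cache is None or self._cache[0] != self.source.value:
            self._cache = (self.source.value, self.source.value * 2.0)
        return self._cache[1]
```

If the data model changes, the cache is simply recomputed; the cache can never disagree with what the data model says.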
1:38:32: And the whole synchronization is built on top of that. The whole idea is that by default you don't actually have to think
1:38:38: about it: the data model is going to handle the synchronization for you. It's going to
1:38:43: resolve conflicts, say if multiple people change a thing, or if people
1:38:49: change a thing they're not allowed to, it's going to resolve those data changes.
1:38:55: And you essentially build your thing to respond to data. So if your data is guaranteed to be
1:39:01: synchronized and conflict-resolved, then the behaviors that depend on the data always lead to the same, or
1:39:08: at least convergent, results. What this does is give you the freedom to write
1:39:14: systems in all sorts of different ways, but the main thing is that you
1:39:21: don't have to worry about synchronization, it just kind of
1:39:26: happens automatically. And it changes the problem: instead of
1:39:41: things being synced versus
1:39:47: non-synced being a problem you have to think about, it turns it into an optimization problem, because you could have
1:39:53: people computing the same things multiple times, or computing things on the wrong end, things that could be computed from
1:39:59: other stuff, and you end up wasting network traffic as a result. But for me
1:40:04: that's a much better problem to have than things getting out of sync. What we do have is that the data
1:40:11: model has mechanisms to optimize that, one of those mechanisms being
1:40:16: drives. A drive is a way of telling the data
1:40:22: model: I'm taking control of this part of the data model, don't synchronize it, I am responsible for
1:40:29: making sure this stays consistent when it needs
1:40:35: to. A way to think about drives: you can have something like a SmoothLerp node, which is
1:40:42: one of the community favorites, and the way that works, it actually has its own internal computation that's not
1:40:47: synchronized. Whatever the input to the SmoothLerp is, that's generally synchronized, because
1:40:54: it comes from the data model, but the output doesn't need to be synchronized, because it's convergent. Since you're
1:41:00: guaranteed to have the same value on the input for all users, you can fully compute the output value on each user
1:41:07: locally, because it's all converging towards the same value. And as a result everybody ends
1:41:15: up with, if not the same, at least a very similar result on their end.
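Why the SmoothLerp output can stay local can be shown with a tiny simulation: two users start from different local states, but both smooth toward the same synchronized target, so their outputs converge without ever exchanging the output itself. The smoothing factor and frame count here are arbitrary, not Resonite's actual values.

```python
def smooth_step(current, target, factor=0.5):
    # One frame of exponential smoothing toward the synchronized target.
    return current + (target - current) * factor

def simulate(start, target, frames):
    v = start
    for _ in range(frames):
        v = smooth_step(v, target)
    return v

# Two users begin with different local states (e.g. they joined at different
# times), but both read the same synchronized target value of 10.0...
user_a = simulate(0.0, 10.0, 30)
user_b = simulate(5.0, 10.0, 30)
# ...and after enough frames both are effectively at the target, so the
# output never needed to be sent over the network.
```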
1:41:21: It is also possible, if you want, to diverge the data model. For
1:41:27: example, ValueUserOverride does this, but in an
1:41:33: interesting way, because it actually makes the divergent values part of the data model. The value that
1:41:39: each user is supposed to get is still synchronized, and everybody knows this user should be getting this
1:41:45: value, but the actual value that you drive, that diverges for
1:41:51: each user. It's a mechanism built on this principle to handle this kind of scenario. You can
1:41:58: also sometimes drive things from things that are going to be different on each user, and have each user see a
1:42:04: different thing; you can diverge it. The main point is, it's a deliberate
1:42:10: choice, at least it should be in most cases, unless you do it by accident, but we'll try to make
1:42:15: it harder to do by accident. Because if you're thinking, I'm going to
1:42:21: specifically make this thing divergent, it's much less likely
1:42:28: to happen by accident, whether through a bug or misbehavior. The system is designed in a way to make sure
1:42:34: that everybody shares the same general experience.
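The ValueUserOverride concept, per-user values kept inside the synchronized data model while the driven value diverges per user, might be modeled like this toy class. This mirrors the concept only; it is not the real component's API.

```python
class ValueUserOverride:
    def __init__(self, default):
        # Synchronized state: a default plus a per-user override table.
        # Everybody replicates this, so everyone agrees on what each
        # user is *supposed* to see.
        self.default = default
        self.overrides = {}        # user id -> value, part of the data model

    def set_override(self, user, value):
        self.overrides[user] = value

    def drive_for(self, local_user):
        # Each client evaluates this locally, producing a deliberately
        # divergent driven value, while the table above stays in sync.
        return self.overrides.get(local_user, self.default)
```

The divergence is intentional and fully described by synchronized data, which is what makes it safe.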
1:42:40: So generally, if you for example consider IK, like you mentioned,
1:42:46: the actual computation of the bones,
1:42:52: that's computed locally, and the reason for that is that the inputs to the IK are the data
1:42:58: model itself, which is synchronized. The real-time values are things like hand positions if you have
1:43:04: tracking, feet positions, head position. Those are synchronized, and
1:43:09: because each user gets the same inputs to the IK, the IK on everybody's end ends
1:43:15: up computing the same or a very similar result. Therefore the IK
1:43:20: itself doesn't need to be synchronized, because the final
1:43:26: positions are driven, and essentially that is a way of the IK saying: if you give me the same
1:43:33: inputs, I'm going to drive these bone positions to match those inputs in a
1:43:39: mostly deterministic way. So that doesn't need to be synchronized. You also mentioned
1:43:46: local ProtoFlux. With local ProtoFlux, there is actually a way for you to
1:43:51: hold some data that's outside of the data model. So locals and stores, they
1:43:57: are not synchronized. If you drive something from those, it's going to
1:44:02: diverge, unless you take the responsibility of computing that local value in a way that's either convergent
1:44:08: or deterministic. So locals and stores are not going to give you a synchronization
1:44:15: mechanism. One thing that's missing right now, which I want to add to prevent divergence by accident, is
1:44:21: a localness analysis. So if you have a bunch of ProtoFlux and you try to drive
1:44:28: something, it's going to check: among the sources of this
1:44:34: value, is there anything that is local? And if it finds something, it's going to
1:44:39: give you a warning: you're trying to drive something from a local value. Unless you make sure the result
1:44:46: of this computation stays synchronized even if this
1:44:52: value differs, or unless you make sure that the local value is computed the same for every
1:45:00: user, or very similar, this is going to diverge. And that will make it a
1:45:06: much more deliberate choice, where you're like: okay, I'm not doing this by accident, I really want to drive
1:45:12: something from a local value, and I'm taking responsibility for making sure this will match for users. Or if you have a reason
1:45:19: to diverge things for each user, it's a deliberate choice; you're saying: I want this part of the data
1:45:26: model to be diverged.
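The proposed localness analysis is essentially an upstream graph search: walk back from whatever feeds the drive and warn if any local or store node is reachable. A hypothetical sketch follows; the node model here is invented, and the real ProtoFlux graph is much richer.

```python
class Node:
    def __init__(self, kind, inputs=()):
        self.kind = kind          # e.g. "local", "store", "sync", "add"
        self.inputs = list(inputs)

def find_local_sources(node, seen=None):
    """Return all local/store nodes reachable upstream of `node`."""
    seen = seen if seen is not None else set()
    if id(node) in seen:          # guard against shared/recursive inputs
        return []
    seen.add(id(node))
    found = [node] if node.kind in ("local", "store") else []
    for inp in node.inputs:
        found += find_local_sources(inp, seen)
    return found

def check_drive(source_node):
    locals_found = find_local_sources(source_node)
    if locals_found:
        return (f"warning: drive depends on {len(locals_found)} local value(s); "
                "result may diverge between users")
    return "ok: drive source is fully synchronized"
```

A drive fed only by synchronized nodes passes silently; anything with a local in its ancestry gets flagged, making divergence an explicit choice.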
1:45:31: So that kind of answers the question. There's a fair bit more I could add to this, but the gist of it is: local in ProtoFlux
1:45:39: literally means this is not part of the data model, this is not synchronized for you. It's a mechanism
1:45:46: you use to store data outside of the data model, and if you feed it back into the data model, you sort of
1:45:52: need to assume responsibility for making sure it's either convergent or
1:45:59: intentionally divergent. The other part is, this only applies if you drive something,
1:46:06: because if you use the local and then you write the value, and the value is not driven, the final
1:46:12: value written into the data model ends up implicitly synchronized, and you don't have the problem. So this only applies if
1:46:19: you're driving something as well. So I hope that helps you understand this a bit
1:46:25: better. We have just about 14 minutes left, and there's one question. I'll see
1:46:31: how long this takes to answer, but at this point I might not be
1:46:36: able to answer all the questions if more pop up, but feel free to ask them. I'll try to answer as many as
1:46:41: possible until I hit the full two hours. So let's see, oie is asking:
1:46:50: you mentioned before wanting to do cascading asset dependencies after particles, is this something you want to do? Um,
1:46:57: it is an optimization that's sort of independent from the whole move, one that I feel could be fast enough.
1:47:05: I still haven't decided. I'm thinking about slotting it in before the audio work as well,
1:47:10: as part of the performance optimizations, because that one will help
1:47:16: particularly with memory usage and CPU usage during loading,
1:47:21: for example when people load into the world and you get that loading
1:47:26: lag as the stuff loads in. The cascading asset dependencies can particularly help with
1:47:33: those cases when users have a lot of stuff that's only visible to them but
1:47:39: not everyone else, because right now you will still load it. There are also other
1:47:46: parts: if you have things that are culled, or maybe users that are culled, you load all of them at once, and
1:47:51: it's just this big chunk. With this system it would be more spread out. The part I'm not
1:47:58: certain about is whether it's worth doing this now or after we make the .NET 9 switch, because
1:48:04: it is going to be beneficial in both. It's one of those optimizations that's smaller and
1:48:10: independent of the big move to .NET 9, and it could provide
1:48:15: benefit even now, before we move to .NET 9, and it will still provide benefit afterwards. So I'm
1:48:23: not 100% decided on this one. I'll have to evaluate it a little bit,
1:48:28: and evaluate how other things are going. It's something I want to do, but there's no hard decision yet.
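The cascading-asset-dependencies idea, loading only the assets that something currently visible actually depends on, could look roughly like this. All names here are invented for illustration; the real system would stream assets asynchronously rather than flipping a set membership.

```python
class AssetSystem:
    def __init__(self, dependencies):
        # object -> list of asset ids it needs
        self.dependencies = dependencies
        self.loaded = set()

    def on_visible(self, obj):
        # Cascade: an object becoming visible pulls in only its own assets.
        for asset in self.dependencies.get(obj, []):
            if asset not in self.loaded:
                self.loaded.add(asset)   # a real system would stream this in

assets = AssetSystem({
    "my_avatar":   ["tex_a", "mesh_a"],
    "hidden_tool": ["tex_b"],            # only visible to its owner
})
assets.on_visible("my_avatar")
# "tex_b" is never requested, so it is never loaded or kept in memory,
# and loading spreads out over time instead of arriving as one big chunk.
```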
1:48:36: Next, climber is asking: could you use the red highlighting from broken
1:48:42: ProtoFlux in a different color for local computation? I don't really understand
1:48:48: the question, I'm sorry. If ProtoFlux is red and it's broken,
1:48:53: you generally need to fix whatever is broken before you start using it again. Usually if
1:48:59: it's red, there's something wrong and it cannot run at all.
1:49:04: So I don't think it should be used anyway; if it's
1:49:09: red, you need to fix whatever the issue is.
1:49:15: Next, ner is asking: are there any plans to open source certain parts of the
1:49:21: code, for example the components and ProtoFlux nodes, so that the community can contribute to those? So, there are some
1:49:28: plans, nothing's fully formalized yet. We've had some
1:49:33: discussions about it, so I'm not going to go too much into details,
1:49:39: but my general approach is that we would do gradual
1:49:44: open sourcing of certain parts, especially ones that could really benefit from community
1:49:49: contributions. One example I can give you is the
1:49:55: importer and exporter system, and also the device driver
1:50:01: system, where it's, what's the term, ripe for
1:50:07: open sourcing. Essentially we'd say: this is the model importer, this is the volume importer, and so on,
1:50:14: this is our support for this format and that format, and we
1:50:21: make the code open,
1:50:26: which will allow for community contributions. So people could contribute things like fixes for formats where some things import
1:50:32: wrong, or alternatively add support for some obscure format that we wouldn't support
1:50:39: ourselves, because, say, you're modding some kind of game or something and you want to
1:50:45: mess with things. You can now use the implementation that we
1:50:51: provided as a reference for how an importer or exporter works. Or if you
1:50:59: need very specific fixes that are relevant to a project you're working on,
1:51:04: just make a fork of one of ours, or even the community's, and modify it for your purposes, and
1:51:11: make changes that wouldn't make sense to have in the default one but that are useful to you. So this is
1:51:18: probably the kind of model I would want to follow. At least initially we'd open source things partially, where it makes
1:51:25: sense, but where it's also easy to do, because open sourcing can be a complicated process if you
1:51:32: want it for everything, because there are pieces of code that have certain licensing, and we need
1:51:38: to make sure it's all compatible with the licensing, make sure everything's audited and cleaned
1:51:43: up. So doing it by chunks, doing just some systems, I feel is a much easier, more approachable
1:51:51: way to start, and we can build from there.
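A gradually open-sourced importer/exporter layer would presumably take the shape of a pluggable registry, where each format handler is a separate module the community can fork or extend without touching core code. A hypothetical sketch of that shape; the registry API and format names are made up.

```python
IMPORTERS = {}

def register_importer(extensions):
    """Decorator registering an import function for one or more extensions."""
    def wrap(fn):
        for ext in extensions:
            IMPORTERS[ext.lower()] = fn
        return fn
    return wrap

@register_importer([".obj", ".fbx"])
def import_model(path):
    # Stand-in for a first-party importer the community could read as reference.
    return f"model imported from {path}"

# A community contribution could register an obscure format independently:
@register_importer([".weird"])
def import_weird_game_format(path):
    return f"custom import of {path}"

def import_file(path):
    ext = path[path.rfind("."):].lower()
    if ext not in IMPORTERS:
        raise ValueError(f"no importer for {ext}")
    return IMPORTERS[ext](path)
```

Because importers only change how data gets into or out of the data model, a client carrying extra importers stays fully compatible with everyone else, which is the anti-fragmentation property discussed below.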
1:51:56: The other part of this is that when you open source something, you generally need maintainers, and
1:52:03: right now we don't really have a super good process for
1:52:08: handling community contributions for these things. That's something I feel we also need to
1:52:14: heavily improve, and that means we need to have some manpower prepared to look at
1:52:20: community pull requests, make sure we have good communication there, and make that whole process run smoothly. There have been
1:52:27: some PRs that have piled up against some of our projects,
1:52:33: some of our open source parts, that we haven't really had a chance to properly look at, because everything has been
1:52:38: busy. So I'm a little bit hesitant to do it now, at least
1:52:44: until we clear up some more things and have a better process. So that is also part of the consideration there, but overall
1:52:52: it is something I'd want to do. I feel like, as the community, people are doing
1:52:58: lots of cool things and tools, like the modding community, they do lots of really neat things, and
1:53:06: doing the gradual open sourcing, I feel, is a good way to empower you more, to give you more
1:53:12: control over these things, give you more control to fix things. There are also parts where, since we're a small team,
1:53:21: our capacity to fix niche issues is very limited, and if you give people the power to help contribute those fixes,
1:53:29: I feel like overall the platform and the community can benefit from those, as well as giving you
1:53:34: ways to shape things, because part of the Resonite philosophy
1:53:40: is giving people as much control as possible, making the experience what you want it to be, and I
1:53:47: feel by doing this, if you really don't like how
1:53:53: Resonite handles certain things, you can fork that part and
1:53:59: make your own version of it, or fix up the issues, so you're not as dependent on us as you
1:54:05: otherwise would have been. The flip side to that, and the part that
1:54:10: I'm usually kind of worried about, is that we want to also do it in a way that
1:54:16: doesn't result in the platform fragmenting, where everybody ends up on a different version of the build,
1:54:23: and then you don't have this shared community anymore. Because I feel, especially at this stage,
1:54:29: that can end up hurting the platform, if it happens especially too
1:54:35: early. And that's also one of the reasons I was thinking of going with importers and exporters
1:54:42: first, because those pretty much cannot cause fragmentation,
1:54:47: because they do not fragment the data model; they just change how things are brought into or out of the data model. The
1:54:56: actual behavior doesn't make you incompatible with other clients. You can have
1:55:01: importers for data formats that only you support and still exist with
1:55:07: the same users and be able to join the same sessions. So that's pretty much
1:55:12: it on the whole open sourcing thing. The reason I want to approach it
1:55:18: this way is to take baby steps, see how everybody responds,
1:55:24: see how we're able to handle this, and then be like: okay, we're comfortable
1:55:31: with this, we can now take a step further, open source more parts,
1:55:37: making it a gradual process rather than one big flip of a switch. Does that make
1:55:45: sense? So we have about 4 minutes left,
1:55:50: and there are no more questions, so I don't know if there's an idea for a ramble, because right now
1:56:00: I don't know what to ramble about. I could ramble more about Gaussian
1:56:07: splatting, but I've already rambled about splats a fair bit. If there's any last-minute question I'll try to answer it, but I
1:56:12: might also just end a few minutes early. And I've ended up rambling about rambling, which is some
1:56:19: sort of meta-rambling, which I'm kind of doing right now. I don't know. I'm actually kind of
1:56:25: curious, everybody: Gaussian splatting, is that something you would like
1:56:31: to see, something you'd like to play with? Especially if you can bring in stuff like this. I can show
1:56:39: you, I don't have too many videos of those, I have like one more. No wait, oh,
1:56:47: I do actually have one video that I want to show. I do need to
1:56:53: fetch this one from YouTube, because I haven't imported this one in. Let's
1:57:00: see. Oh, also, actually, let me just do this first before I start doing too
1:57:05: many things at once. There we go. So I'm going to bring this one
1:57:16: in, once it loads. This one's from YouTube,
1:57:21: so it's actually a bit worse quality, but this is another Gaussian splat that I... well, I have more, but I need
1:57:27: to make videos of them. This one I found super neat, because this is
1:57:32: a pretty complex scene. This is from GSR at BLFC. You can see it captures a
1:57:38: lot of cool detail, but there's a particular part I want you to pay attention to, which I'm going to point
1:57:45: out in a sec, because one of the huge benefits of Gaussian splats is they're really good at not only soft and
1:57:51: fuzzy details but also semi-transparent stuff. So watch this
1:57:58: thing. Do you see these kind of plastic, well, these transparent bits? Do you see
1:58:04: these? Look at that, it's actually able to represent them really well,
1:58:09: which is something you really don't get with traditional mesh-based photogrammetry.
1:58:16: That's actually one of the things: if you wanted to represent this as a mesh, you would lose that,
1:58:22: and that's why Gaussian splat scenes are really good at representing scenes in a way that traditional photogrammetry is not.
1:58:30: I found it really neat. There are also lots of splats I've done. This October I was
1:58:37: visiting the US and I went to Yellowstone National Park again. I have a lot
1:58:42: of scans from there, and a lot of them, because there are a lot of geysers
1:58:48: and everything, there's steam coming out of the geysers, and there's water, so it's reflective
1:58:54: in some places. And I found that Gaussian splatting actually reconstructs that pretty
1:59:00: well. It even captures some of the steam in the air and gives it more of a volume. So it's a
1:59:06: really cool way of representing those scenes, and I just want to be able to bring them in,
1:59:13: publish those in Resonite, and be like: you want to come see bits of Yellowstone,
1:59:18: bits of Death Valley? Just go to this world and you can view it and show it to
1:59:24: people. But yeah, that actually filled the time nicely,
1:59:30: because we have about 30 seconds left, so I'm pretty much going to end it
1:59:35: there. So thank you very much, everyone, for your questions, thank you very much for watching and
1:59:42: listening to my ramblings and explanations and going off on tangents. I
1:59:47: hope you enjoyed this episode, and thank you also very much for supporting Resonite,
1:59:54: whether it's on our Patreon, whether it's by making lots of cool content, or
1:59:59: sharing stuff on social media. Whatever you do,
2:00:05: it helps the platform, and we appreciate it a lot. So thank you very
2:00:11: much and I'll see you next week. Bye
2:00:16: bye!