I miss being able to interact with the game scene while the game is running, like in Unity. It’s a big productivity boost to just zoom out and grab and move objects. In Godot you have to open the remote tree and edit values manually, which can be a pain for debugging.
New nodes/interfaces for adaptive music/sound design.
“FilePath”. It would have PackedScene’s functionality where the path to the file is updated when the file is moved in the editor.
I would use it in place of a string pointing to the next level to be loaded. I tend to add subdirectories as I create new worlds. Manually updating strings isn’t so bad, but when the functionality seems like it’s right there… it sucks. I can’t use PackedScene directly, since each level would have to load the next level, which can cause cyclic references.
Ez ray tracing, because ray tracing is cool.
Good answer. Especially if it could degrade gracefully for low performance, without temporal artifacts. E.g., have ray-surface hits act like point projectors with approximate visibility, so indirect lighting splashes brightness on a soft region instead of creating a hotspot.
I think there’s a general solution to noise that’s gone unused.
Okay, Metropolis light transport jiggles the steps in each bright path to find more low-probability / high-intensity paths. Great for caustics. But all that effort goes into individual pixels. It samples a lot of light coming in to one spot. We want the inverse of that.
When light hits a point, it can scatter off in any direction, with its brightness weighted by the probability of that bounce. So… every point is a light source. It’s not uniform, but it is real light. You could test visibility from every hit along every light path to every point onscreen, and it would remain a true unbiased render that would eventually converge.
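A toy sketch of that “every hit is a light source” idea: one path vertex treated as a virtual point light (VPL). Every name here is an illustrative assumption, not any renderer’s real API, and the surface BRDF, the VPL’s emission lobe, and the visibility test are deliberately left out.

```python
import math

def vpl_contribution(vpl_pos, vpl_radiance, surf_pos, surf_normal):
    """Radiance a surface point receives from one VPL (no shadow test)."""
    d = [v - s for v, s in zip(vpl_pos, surf_pos)]
    dist2 = sum(c * c for c in d)
    dist = math.sqrt(dist2)
    wi = [c / dist for c in d]  # unit direction from surface toward the VPL
    cos_theta = max(0.0, sum(n * w for n, w in zip(surf_normal, wi)))
    # Cosine term over inverse-square falloff; a full version would also
    # multiply by the BRDF at surf_pos and the VPL's directional emission.
    return vpl_radiance * cos_theta / dist2
```

Multiply in the BRDF and a shadow ray per receiver and this stays unbiased; drop the shadow ray and you get exactly the hotspot/splash problem the approximate versions have to soften.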
The sensible reduction of that is to test visibility in a cone from the first bounce offscreen. Like if you’re looking at a wall lit by the moon, the path goes eye, wall, moon, sun. Metropolis would jitter to different points on the moon to light the bejeezus out of that one spot on the wall. I’m proposing to instead check moon-to-wall visibility, for that exact point on the moon, but for nearby spots on that wall. (Deferred rendering can skip testing between the wall and your eye. Pick visible spots.)
One spot on the moon would not accurately recreate soft moonlight - but Blender’s newer Eevee renderer proves that a dozen can suffice.
One light sample, reused across the whole frame like that, is effectively a million samples: one per pixel.
You don’t need to go from all of them to every point onscreen to get some pretty smooth results. Basically it’s “instant radiosity,” but with anisotropic virtual point sources. It’s just a shame shadowing still needs to be computed (or faked); otherwise applying them would be doable entirely in screen space.
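The instant-radiosity structure being described boils down to two phases. A minimal, deliberately abstract sketch (geometry, materials, and ray casting are hidden behind caller-supplied functions, and every name is hypothetical; real VPL renderers also clamp contributions and cluster lights):

```python
def trace_light_paths(num_paths, bounce_fn, max_bounces=3):
    """Phase 1: follow paths out from the light; every hit becomes a VPL.
    bounce_fn(state) returns (vpl, next_state), with next_state None when
    the path is absorbed."""
    vpls = []
    for _ in range(num_paths):
        state = "at_light"
        for _ in range(max_bounces):
            vpl, state = bounce_fn(state)
            vpls.append(vpl)
            if state is None:
                break
    return vpls

def shade(pixels, vpls, contribution_fn, visible_fn):
    """Phase 2: each pixel sums the VPLs it can see. The cost is
    pixels x VPLs visibility tests, which is the part that has to be
    approximated or faked for real-time use."""
    return [
        sum(contribution_fn(p, v) for v in vpls if visible_fn(p, v))
        for p in pixels
    ]
```

The key property is that one shared set of VPLs lights every pixel, so neighboring pixels agree and the result comes out smooth rather than noisy; the remaining expense is exactly the visibility term in `shade`.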
A boring alternative interface would be nice. Properly Windows 95 fare: element hierarchy made crystal clear with relief-shaded borders, fat margins on any nested elements, labeled-by-default icons, “multiple document interface” instead of modes or tabs. Look. Blender’s a lost cause. 3D software is universally going to be inscrutable, because you’re decorating a room through the keyhole. But sometimes this software has a learning curve like the white cliffs of Dover, and even after learning the right answer, I cannot imagine why that’s the right answer.
This is unlikely to happen mostly because the worst decision software can make is two decisions. Established users don’t want to deal with the ambiguity of having two non-overlapping ways to do everything. Developers sure as heck don’t want to do everything twice just to make it-- not even simpler or easier, but more “discoverable.”
What’s more plausible is recapturing how Flash worked on the web. Some one-line / one-file way to drop in a whole-ass game. Even if that means cramming every asset into the same bulging JS file. One that looks for an appropriate rectangle of Canvas to emit graphics, and if it doesn’t find one, offers to create one anyway.
I guess the hard part in that case is trying to prevent what Flash on the web did for advertisers. A lot of neat browser features are effectively impossible because of an arms race by greedy idiots.
Honestly I’m the opposite: I want Godot’s window manager to be more like Blender’s.
Blender just does it so flexibly, and the minimum size of the panels isn’t actual continents, so I can actually shove stuff I don’t need out of the way, instead of having 30%–60% of the screen wasted at all times.
Godot has improved this a bit in 4.1, since you can now pop the scripting and shading workspaces out into new windows, but it’s nowhere near as universally implemented as in Blender. You can’t pop out the animation editor, the audio buses, the output panel, or the debugger, and absolutely worst of all, you can’t pop out the viewport. Even if you press the “distraction free mode” button, the bottom panel still eats 20%–30% of the screen.
Which means there’s no way to quickly preview what the game’s gonna look like at corner-to-corner full screen without launching the game and waiting for it to load.
This is even worse if you need to have multiple viewports open, because they all share that same limited space with no option to pop one out to a second monitor
GD’s window manager is just incomplete compared to Blender’s, and I far prefer Blender’s way of doing things.
Blender’s UI regions are a thing to behold. I’ve never seen another program with a UI that’s so flexible yet so easy to customize. It’s a thing of beauty and it’s a damn shame it hasn’t been copied.
Exactly! Beautifully said