Following one man's task of building a virtual world from the comfort of his pajamas. Discusses Procedural Terrain, Vegetation and Architecture generation. Also OpenCL, Voxels and Computer Graphics in general.
The level of detail is determined in two dimensions, correct? How much of a load do miscellaneous underground cave systems generate? I know in a previous post you were saying how you managed to cut down on the load for polygons that aren't in view, but it appears subterranean features are still being generated to some degree. Also, if you are flying high above the ground, is the high density directly below the camera necessary?
The clipmap is fully 3D, so you have LOD switches in any dimension. The engine does not make a distinction between what is horizontal or vertical.

The caves are still generated, but at lower LODs according to how distant they are from the camera.

If you are flying over the ground, the high density below the camera is not needed. What you see in this video is not the same as if the camera was flying. The clipmap center remains on top of the terrain here. This is for showing the clipmap rings at higher resolution. If the clipmap had moved along with the camera (like you would have in a regular application), these high-res rings would be very hard to see because they would contain mostly air.
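A fully 3D clipmap can be pictured as nested cubic rings around a center point, where the LOD of a cell depends only on its distance from that center, with no special treatment for the vertical axis. Here is a minimal sketch of that idea; the actual ring sizes and switching rule in the engine are not described in the post, so `ring_size` and the doubling scheme are assumptions for illustration:

```python
def lod_level(cell, center, ring_size=32):
    """LOD for a cell in a cubic clipmap.

    The ring index grows with the Chebyshev (max-axis) distance from
    the clipmap center, so the same rule applies vertically and
    horizontally -- caves below you get the same treatment as hills
    around you. Each successive ring is assumed to double in extent.
    """
    d = max(abs(cell[i] - center[i]) for i in range(3))
    level = 0
    while d >= ring_size:
        d //= 2  # each coarser ring covers twice the distance
        level += 1
    return level
```

Because the distance metric treats all three axes the same, a cave 40 cells straight down ends up at the same LOD as a hill 40 cells to the north.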
Ahh cool, so it is a full cubic LOD map. Thanks for the clarification.
Great LOD improvements Miguel! I've been anxiously awaiting your next post, as I'm sure many others have too.

The LOD has come very far, and the amount of work you've put into it has really paid off! The silhouettes are holding up VERY well, and as you noted in the video, that was the most problematic part of the previous LOD iterations. This is great for the engine, and I think it is very usable at this point. The tree crowns still have a bit of trouble at one or two LOD points, but that's just nit-picking... it is awesome now!

The sub-materials do enhance things; the ground with mixtures of pebbles and rock was very good looking. I would love to have you do a short video just discussing this area, as material generation and coding for sub-materials will be fun to understand.

Also, I have to say the grass and small plants code looks better to me too (who knows, you probably didn't even change anything there); the patches and placement are looking great and working really well with the terrain, as far as I could see.

SUGGESTION: I'm sure the tech/talkie videos take a good amount of time when you are trying to discuss or show the technical content. When you find you can't do that due to being busy, perhaps you could just load a random world and walk or fly around for "eye candy's sake" for 1-2 minutes, just slowly looking around and exploring? I could have watched an entire video looking at that vista at 0:22, looking down the mountain towards the mountains in the distance. Seeing the sky, trees, atmospheric effects and the terrain all combined is breathtaking, and will go far to maintain everyone's excitement for your work, even if it's not a tech video!
Thanks, yes it is getting there :)
Oh, another question. Are you going to implement configurable LOD distances? It would be nice to see how much gritty detail you can get from your engine on a really high-end system (or on systems yet to come).
Yes this is something I can already configure.
That is awesome!
So just for clarification: what does Material include in this case? Does it include textures, shape and variance? (As in, how much the shape can vary from the original.)
These results are impressive, as always!

Do you plan to add procedurally generated roads or animal/human paths to make the world easier to explore?
It WOULD be interesting to have procedurally generated roads, for example between those towers (for now). The only possible problem with roads is that roads generally connect two locations; however, those locations can be outside of the world-generation range (as the world is being generated on the fly). Thus, you might have a road which connects an existing point with a not-yet-existing point. How would you solve that? One way is, instead of making roads based on the points of interest, to add points of interest based on the roads =P.
I don't think it would be that hard, cave systems already generate to locations outside of the world generation range.
Not quite what I meant, let's see... how do I explain this?...

The cave system (most likely) will create a cave bit at some point; then when you walk towards it, and a new chunk of world is made, it will notice: hey, this cave isn't finished, let's add another bit =D. And it will add another bit, perhaps checking how big it is, and eventually, at some point (randomly chosen through chance?) it will create an end to it, whether this is through a corner connecting two cave-ways or just a dead end. So the end point isn't known until the cave reaches it, or the system just decides on some point which has nothing to do with anything other than the cave system outside of the generated world.

Roads, however, are dependent on wherever they are going to. So they have to know where the locations of interest are outside of the generated world before they are generated. In this case, the system could work roughly like the cave system between point A (where you stand) and point B (the place outside of the generated area where the road goes to), but its end location has to be set for it to reach its destination. If it completely worked like the cave system, you could have roads ending kilometers away from an area which is supposed to be connected by a road, like a city. The only areas which would have roads connected to them would be the sides of the areas of interest facing away from you upon generation.

I hope this clears it up =P.
Yes, caves and roads are not exactly the same. In my case I can know if a given spot has caves or not without looking at neighbor points. This is what I call a "local" method. I do not generate caves in the way Kamica has described, but he is right about roads implying some sort of globality. It is the same with rivers and lakes. You could be looking at a river in a valley that starts many kilometers up in the mountains.

This does make their generation more difficult, but it is still doable. I think I will be covering this in the near future.
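The "local" property described above can be illustrated with a toy example: a function that decides whether a point is inside a cave using only that point's coordinates, so any chunk at any LOD can evaluate it with no knowledge of its neighbors. The post does not say which function the engine actually uses; the hash-based stand-in below is purely an assumed placeholder (a real engine would use a smooth 3D noise basis rather than blocky lattice hashing):

```python
def hash3(x, y, z):
    """Cheap deterministic pseudo-random value in [0, 1) for a lattice
    point. Stand-in for a real noise basis; the engine's actual
    function is unknown."""
    n = ((x * 73856093) ^ (y * 19349663) ^ (z * 83492791)) & 0xFFFFFFFF
    return ((n * 2654435761) & 0xFFFFFFFF) / 2**32


def is_cave(x, y, z, threshold=0.1):
    """A 'local' cave test: the answer for (x, y, z) depends only on
    (x, y, z) itself, so chunks can be generated independently and in
    any order. Roads cannot work this way, because whether a road
    passes through a point depends on distant endpoints."""
    return hash3(x // 8, y // 8, z // 8) < threshold
```

The key contrast with roads is that re-evaluating `is_cave` for the same point always gives the same answer with no global state, which is what makes on-the-fly chunk generation possible.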
As soon as I read "globality" I immediately thought: you don't necessarily need detail, you could make world-generation LOD... sort of, creating more detailed world generation the closer you get to something. So, for example, at the nearest level you produce all the visible detail, such as trees, hills and individual buildings, etc. Then at a further-away level you could just have indicators of "here is a city", maybe even its rough size, and "here is a forest", etc. Then you can have the short-ranged world generation work on the details if the player comes close. But I bet you already have plans.

Also, I am glad to hear that you will be discussing such stuff in the near future =D. Looking forward to it.
I'm also looking forward to seeing how you're going to handle water flow. To do water 'properly' (i.e., not like the Minecraft version, where a river will only flow a short distance from its point of origin) seems insurmountable - you can't 'look ahead' an indefinite distance for a source that may or may not exist... interesting...
These are some great improvements, but it looks like the video quality got eaten by the compression algorithm. I'd love it if you could re-export and re-upload!
Yes, I do not know what is going on. I uploaded a 9GB video to YouTube for just a 5 min clip. Codec is H.264, High Profile, bitrate 237,415 kbps. This is far more information than what Google needs to stream HD back down. Maybe someone out there can tell me what I am doing wrong. Or is it that YouTube just does not like me?
Maybe YouTube should use different methods of compression for different content types? I don't really know how it would work, but count me in as another eagerly waiting to be able to see the wireframes in this video. If you find the time, a couple of screenshots would be wonderful!!
I think the problem is I was encoding with only one pass. I am trying now with two passes, which should be better. I have a new video uploading now. In a few minutes I will know for sure.
Nope. Tried with two passes, 50,000 kbps bitrate. Still looks like crap. I am encoding in After Effects, maybe it is related to that. The video I am uploading has all the detail in it... HELP!
That seems to imply YouTube upload settings, but I've had no experience with that, sorry. Your older videos turned out higher quality on YouTube; is there anything that changed in the meantime? I wish I was able to help.
Here you go Miguel. I use something similar to this but with Vegas 9: http://www.youtube.com/watch?v=J_7C_5CoPt8

Finally got my Google account on here :). Your update is great as usual too. Keep em coming!
Thanks, exactly what I needed.
Miguel: Try using something called MeGUI and the x264 codec before uploading to Youtube. You get much smaller videos and Youtube seems to like the format more.
Thanks will try that for sure.
How far in advance does the terrain generate? Also, are shadow maps generated every frame? If they are, is it possible to generate higher-quality shadow maps off of geometry that isn't being rendered yet? That way far-away surfaces can gain extra detail without increasing the processing load too much.
Only what you see is generated. In that sense nothing is generated in advance.

Shadow maps are generated when the sun changes position or there is a change in the scene, like the player removing or adding voxels.
That makes sense. Does the global shadow map update when voxels are added/changed/removed? Or is it just a local change?
Also, is there any benefit to pre-generating geometry before it is required? Can it be used to smooth out lod transitions?
I am doing a full map update. I could isolate the update to the potentially affected area, but the whole thing does not take much anyway. It is all in the GPU.

Yes, the benefit of generating features before they are needed is you can push LOD transitions a bit farther. But now this crosses a line, which is that you must be able to predict what areas will be needed next. Prediction is always tricky, and it does have a price when you get things wrong.

But I think I have figured out a good scheme for prediction. I still need to try it.
Velocity based prediction?
Velocity does not help much as it is fairly constant. It is mostly the direction. I will post in the future about this.
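One way to read "it is mostly the direction": bias the generation center a fixed distance along the normalized movement direction, so new terrain is cooked ahead of the camera regardless of how fast it moves. This is only a guess at the kind of scheme meant here; `lead_cells` and the exact bias rule are assumptions:

```python
import math


def prefetch_center(position, velocity, lead_cells=16):
    """Direction-biased prediction: shift the generation center a fixed
    number of cells along the (normalized) movement direction. The
    speed itself is deliberately ignored, matching the observation
    that direction matters more than velocity magnitude."""
    speed = math.sqrt(sum(v * v for v in velocity))
    if speed < 1e-6:
        return tuple(position)  # standing still: no bias
    return tuple(p + lead_cells * v / speed
                 for p, v in zip(position, velocity))
```

Note that tripling the speed does not move the prefetch center any farther; only turning does.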
That's true, I was off dreaming of exploring your worlds with a car/plane.
Are you splitting up generation of the various LOD densities? I'm imagining adding/removing a row of the next higher LOD every time you pass a barrier. So if you are travelling north, the closest LOD would be generating new rows multiple times a second north of you, while south of you reverts to the next further LOD. Then the next further LOD would generate once every few seconds, and further out, every half minute, etc. As far as pre-generation goes, this method gives you some directionality.
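The refresh schedule this comment proposes (closest ring every frame, each coarser ring half as often) could be sketched as a simple frame-modulo check. This is an illustration of the commenter's suggestion, not of what the engine actually does; `num_lods` and the doubling interval are assumptions:

```python
def rings_to_update(frame, num_lods=6, base_interval=1):
    """Return which LOD rings should regenerate on this frame.

    Ring 0 (closest) refreshes every base_interval frames, and each
    coarser ring refreshes half as often, so distant terrain is
    rebuilt rarely while nearby rows can track the camera closely.
    """
    return [lod for lod in range(num_lods)
            if frame % (base_interval << lod) == 0]
```

On frame 0 every ring updates at once; after that, the coarser a ring is, the longer it waits.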
It looks like you organise the world in "chunks", would I be right in saying that you avoid cracks between the blocks by ensuring that edges which have a vertex near the chunk border are not considered for the mesh simplification? (E.g. in the wireframe part of the video the higher density of triangles looks like a regular grid.)Are the chunks always a fixed size? I can imagine that having larger chunks farther away from the viewpoint would allow better compression of the mesh while the fixed border vertices/edges ensures there are no cracks?
Yes, this is correct. At the moment I do not simplify vertices on the edges. This is because each cell is (potentially) processed in parallel. Any simplification along an edge would have to be shared by two cells. This creates a serial dependency, which affects how efficient the parallel solution is. For now I am betting GPUs can take the additional triangles. The CPU generation is often the bottleneck.

Chunks are not fixed size. They get larger as they are farther from the camera.
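"Chunks get larger as they are farther from the camera" could follow a rule like the one below, where each doubling of distance doubles the chunk edge length. The base size and range here are made-up numbers for illustration; the post does not give the engine's actual values:

```python
def chunk_size(distance, base_size=16, base_range=64):
    """Chunk edge length as a function of distance from the camera.

    Within base_range cells the chunk edge is base_size; each doubling
    of distance doubles the chunk size. Larger distant chunks allow
    more mesh simplification per chunk, while the unsimplified border
    vertices keep neighboring chunks crack-free.
    """
    size = base_size
    r = base_range
    while distance >= r:
        size *= 2
        r *= 2
    return size
```

With these assumed constants, a chunk right next to the camera is 16 cells across, while one 200 cells away is 64 cells across.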
Thanks for the reply. That's interesting, I have done some similar (although much less ambitious!) work myself, and implemented a similar scheme. Good to know I was not doing something completely daft. Keep up the good work! :)