Friday, October 28, 2011

Getting back into schedule

I haven't been blogging lately because of various assignments and midterms. I now have a brief window of time where I can catch up on both my blog and my own personal time, and then it's off to MIGS.

Modeling Update
The geometry in the model is pretty much final. There may be a few tweaks here and there, but the main task these past weeks was unwrapping the model. I wanted to get the assignment done as early as possible because it was due on the same day as the midterm for the engine design course. To my surprise, it didn't actually take long at all. I sat down for a couple of hours and Maya didn't give me any trouble whatsoever; it was a miracle! Working very systematically made this a really productive session.

An overview of the entire model, showing the relatively consistent checker pattern across the entire character. Mudbox will make this more awesome!
UV layout of the character. I've tried to make it so that every shell corresponds to either the left or right side of the body, with a few pieces here and there.

Shaders
I've been poking around in Nvidia's FX Composer, and found out some pretty neat stuff. First of all, I've figured out how to create your own variables, such as plain floats, as well as how to load in textures and hook everything up to corresponding UI widgets for easier modification. This may sound very simple and kind of dumb, but just figuring out how to get stuff to display and how to properly make variables teaches you a lot about the shader production workflow and how the structure is actually laid out in the Cg/HLSL language.
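For reference, here is a minimal sketch of what those tweakables look like in a Cg/FX file. The annotation strings (UIName, UIWidget, and so on) follow FX Composer's conventions for building the property panel; the specific names and ranges here are just examples, not the exact ones from my file:

```hlsl
// A tweakable float exposed as a slider in FX Composer's property panel.
float Bump <
    string UIName   = "Bump Strength";
    string UIWidget = "slider";
    float  UIMin    = 0.0;
    float  UIMax    = 2.0;
    float  UIStep   = 0.05;
> = 1.0;

// A texture parameter, plus the sampler the pixel shader reads it through.
texture NormalTexture <
    string ResourceType = "2D";
    string UIName       = "Normal Map";
>;

sampler2D NormalSampler = sampler_state {
    Texture   = <NormalTexture>;
    MinFilter = LinearMipMapLinear;
    MagFilter = Linear;
};
```

Once a parameter is declared like this, FX Composer picks it up automatically and you can tweak it live without touching the code again.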

The very first thing I did was load up a 'Phong_Bump_Reflect' material and look around in the code to see what was happening. I asked Derek Fullerton for some help with the program, and we figured out how to create and modify your own textures/variables. We played around with some of the settings and added some things such as a specularity map to our sphere, which resembled the earth. I later tweaked this and used the same texture for an opacity map, and a 'reflection map' where only the water on the earth reflects the image.

You can see that only the water shines and reflects, all using a specularity map.
Here I used the specularity map as the opacity map, and made the value semi-transparent. You can see some odd artifacts to the left side of the Earth, but that's probably an issue with how I am using blend modes.
I realized that if any of our efforts using Mudbox are going to be useful, these shaders will have to do their job properly. One of the hardest and most puzzling things for me right now is figuring out how normal maps work. They work here in the Cg file, but it's been pre-done for me, so all I can do is look at how it works. I hope this is one of the things we could go over in either class or tutorial in game engine design, specifically the actual math and theory behind the shader, because I am fairly confident that I could load in a normal map and all the necessary variables; I just wouldn't know what to do with them in code. For example, here is where the magic happens in this Cg shader:

    float3 bump = Bump * (tex2D(NormalSampler, IN.UV).rgb - float3(0.5, 0.5, 0.5));
    Nn = Nn + bump.x * Tn + bump.y * Bn;
    Nn = normalize(Nn);

I know that the 'Bump' variable is the strength of the bump, and that NormalSampler is just the actual normal map in memory, but I'm not sure why 0.5 is being subtracted from the .rgb values of the entire image (the - float3(0.5,0.5,0.5)). I know that the normal is then modified using the normal, the world tangents, and the binormal (whatever that is) via the normal map's UV values. I just don't know how I would go about getting this information (the tangents and binormals specifically), or what these values actually represent. I guess I'll try Wikipedia later, or ask a professor when I am doing the homework questions.
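For what it's worth, the subtraction is just undoing the texture encoding: a normal map stores each component in the [0, 1] range, so subtracting 0.5 recenters it around zero (the usual full decode is 2*c - 1; this shader leaves the factor of 2 to be absorbed by Bump and the final normalize). The tangent and binormal are per-vertex vectors that, together with the normal, span the surface's local "tangent space" (tangent along U, binormal along V), and they usually arrive as vertex attributes supplied by the app or exporter. A hedged sketch of how that fits together, with names assumed:

```hlsl
// Hedged sketch: tangent-space inputs as an app/exporter typically binds them.
// Tangent and Binormal point along the surface's U and V texture directions.
struct appdata {
    float3 Position : POSITION;
    float2 UV       : TEXCOORD0;
    float3 Normal   : NORMAL;
    float3 Tangent  : TANGENT0;
    float3 Binormal : BINORMAL0;
};

// The decoded normal-map vector is expressed in that per-vertex basis:
// x along the tangent, y along the binormal, z along the normal.
float3 perturbNormal(float3 Nn, float3 Tn, float3 Bn, float2 uv,
                     sampler2D NormalSampler, float Bump)
{
    // Recenter [0,1] texels around zero; the missing scale of 2 folds
    // into Bump and the normalize below.
    float3 bump = Bump * (tex2D(NormalSampler, uv).rgb - float3(0.5, 0.5, 0.5));
    Nn = Nn + bump.x * Tn + bump.y * Bn;
    return normalize(Nn);
}
```

So "getting this information" mostly means making sure the mesh exporter writes tangents and binormals (or computing them from the UVs on load); the shader itself just consumes them.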

Getting this working is quite important because a lot of what we are going to do for the modeling class assumes that we can somehow have normal mapping in our game. Of course, I am also interested in getting it working for its own sake, and because things will look more impressive if I do!

No normal map
Normal Mapped.
You can see the difference a normal map makes in terms of detail, especially so on a character like the one in the above section.
I've also fooled around with some other effects from Nvidia's shader library. Things like the post glow filter added a sort of 'atmosphere' around the earth. It looks off because the atmosphere is still lit where the light is not hitting the Earth. That could be easily fixed by also applying a diffuse value over the expanded geometry, so only the parts that are getting hit with light will respond with the blue glowing edge.
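The fix described above would amount to something like this in the glow pass (a hedged sketch; the variable names are made up, not from the actual shader):

```hlsl
// Hedged sketch: scale the glow color by a Lambert (diffuse) term so the
// atmosphere halo only appears on the lit side of the planet.
float3 glowColor = float3(0.3, 0.5, 1.0);            // bluish atmosphere tint
float  NdotL     = saturate(dot(Nn, Ln));            // diffuse falloff term
float3 glow      = glowColor * GlowAmount * NdotL;   // dark side stays dark
```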

Earth with an atmosphere, albeit a kind of fake looking one.
The other shader effect I tried was post radial blur. This is a really neat 'motion blur'-ish effect, where it looks like you are moving really fast. It has some cool values that you can tweak, like the direction of the overall blur, its starting position, and how far the blur expands over the screen. This shader uses a similar method to the HDR shader talked about in class, but only in terms of render-to-texture. All it does is render the scene to a texture and apply a blur with the parameters chosen. Still looks cool though.
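As I understand it, the core of that post pass is just sampling the scene texture several times while scaling the UVs toward a center point. A hedged sketch of the idea, with assumed parameter names (BlurCenter, BlurWidth, SceneSampler), not Nvidia's actual code:

```hlsl
// Hedged sketch of a radial-blur pixel shader: the scene is first rendered
// to SceneSampler, then each pixel averages samples pulled toward BlurCenter.
float2 BlurCenter;   // where the streaks converge, in UV space
float  BlurWidth;    // how far the blur stretches across the screen
sampler2D SceneSampler;

float4 radialBlurPS(float2 uv : TEXCOORD0) : COLOR
{
    const int NSAMPLES = 16;
    float2 toCenter = uv - BlurCenter;   // direction of this pixel's streak
    float4 color = 0;
    for (int i = 0; i < NSAMPLES; i++) {
        // Each successive sample sits a little closer to the center.
        float scale = 1.0 - BlurWidth * (i / (float)NSAMPLES);
        color += tex2D(SceneSampler, BlurCenter + toCenter * scale);
    }
    return color / NSAMPLES;
}
```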


Programming plans
Working on the homework assignments familiarized me with the engine, but I still think I need to mess around with the code to really get a grasp of what's going on. Actually, that's not quite an accurate way of representing my confusion. With me, it's more that I don't know that certain features of Wild Magic even exist, and when I try to do something they blindside me. I'll have to read the book more and practice doing random things in Wild Magic before I can start helping Dan Cortens structure proper UML and try to use the design patterns talked about in class wherever they make sense.

November is our 'go month', where we usually gear up and go full steam ahead. Unfortunately, as hard as we try to get the game started early, there is just either not enough knowledge or not enough time to actually do anything productive. November is around the time when we know enough to start doing something meaningful, so the plan is that the day after we get back from MIGS, we will start having more regular group meetings and work on the classes, art, and design aspects of our game. From my point of view this is not necessarily a bad thing. If we focus on learning concepts in the first two months, we are more likely to grasp that knowledge quickly and efficiently. In doing this, we can design our code and game smarter by starting later, rather than haphazardly hacking together things that may or may not make sense right away. At least this is my justification for our relatively late start. I know we learned that starting early and starting small are very good ways of going about building games, but in this case, we are learning as we go. We are building our airplane while we are flying it, so to speak.
