
Calculating Face Normals

If you want to have lighting in your scene, it is essential to have at the very least a normal vector for each triangle. Failure to do this will cause certain parts of the model to be unlit, or lit in a peculiar way.

There really isn’t anything to calculating a face normal. Follow these steps:

1. First, you need two vectors in the plane of the triangle. Because you have three points that lie within the plane (the vertices of the triangle), it is easy to create two vectors. Going back to algebra or geometry, you might remember that a vector can be found by taking the terminal point minus the initial point. The points of the triangle are these points. The first vertex of the triangle (Vertex0) can be considered the initial point for both vectors, the second and third vertices are the terminal points of the first and second vectors (Vector1 and Vector2) respectively. Therefore, Vector1 will be Vertex1 minus Vertex0, and Vector2 will be Vertex2 minus Vertex0.

2. A vector normal to the triangle points straight up at a 90-degree angle to the triangle’s plane. Taking the cross product of the two vectors you just found will yield a single vector, orthogonal to the plane. It is important that you always take the cross product in the same order. The vector produced by crossing Vector1 with Vector2 (Vector1XVector2) is not the same as the vector produced by crossing them the other way (Vector2XVector1). The two operations will produce vectors pointing in opposite directions. If some of your faces appear to be lit incorrectly, make sure you are computing your vectors and cross products correctly. However, if all of your faces are lit incorrectly, try crossing your vectors in the opposite order. The normals may be pointing into the model, rather than out of the model, as they should be.

3. Before you send this vector to the renderer, be sure to convert it to a unit vector. Remember that a unit vector must have a magnitude of one. To create a unit vector, you divide each individual component of the vector by the vector’s original magnitude. This can be done manually, or you can use the Normalize function of the CVector3 class.
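
To make the three steps concrete, here is a minimal C++ sketch. It assumes a bare Vector3 struct rather than the CVector3 class used in the text, and the helper names (Subtract, Cross, Normalize, FaceNormal) are mine, not the book's:

// Minimal sketch of face-normal calculation. Vector3 is a stand-in for
// whatever vector class you use (the text uses CVector3).
#include <cmath>

struct Vector3 { float x, y, z; };

Vector3 Subtract(const Vector3 &a, const Vector3 &b)
{
    Vector3 r = { a.x - b.x, a.y - b.y, a.z - b.z };
    return r;
}

// Step 2: the cross product. Order matters; swapping the operands flips the normal.
Vector3 Cross(const Vector3 &a, const Vector3 &b)
{
    Vector3 r = { a.y * b.z - a.z * b.y,
                  a.z * b.x - a.x * b.z,
                  a.x * b.y - a.y * b.x };
    return r;
}

// Step 3: divide each component by the magnitude to get a unit vector.
Vector3 Normalize(const Vector3 &v)
{
    float mag = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    Vector3 r = { v.x / mag, v.y / mag, v.z / mag };
    return r;
}

Vector3 FaceNormal(const Vector3 &vertex0, const Vector3 &vertex1, const Vector3 &vertex2)
{
    Vector3 vector1 = Subtract(vertex1, vertex0);   // Step 1: Vertex1 - Vertex0
    Vector3 vector2 = Subtract(vertex2, vertex0);   //         Vertex2 - Vertex0
    return Normalize(Cross(vector1, vector2));      // Steps 2 and 3
}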

Take a look at this chapter’s first demo, found on the CD in the Code/Chapter10/Normals/ directory, for a demonstration of face normals.

Calculating Vertex Normals

The other type of normal is called the vertex normal. Now, instead of one normal for each polygon, one normal is used for every vertex.

Why use vertex normals instead of face normals? The answer is simple—using vertex normals gives you a much nicer looking model. Instead of each polygon being flat-shaded, the lighting is now interpolated between the vertices, giving a nice smooth shade.

Figures 10.1 and 10.2 compare the visual difference between vertex and face normals. Quite a difference, eh?

Calculating vertex normals is not difficult. The first thing to do is calculate all of the face normals. Then, for each vertex, you must determine which faces share that vertex. Once you find all of them, add up all of those faces’ normals and divide by the number of shared faces. Finally, normalize the result; the averaged vector is not guaranteed to have a magnitude of one, and this final unit vector is the vertex normal.

//pseudocode to calculate vertex normals from face normals
for each vertex
    for each face
        if face contains vertex
            add the face's normal to a running total for this vertex
            increment the count of faces sharing this vertex
    divide the running total by the number of faces sharing the vertex
    normalize the result to obtain the vertex normal
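
The same procedure in C++ might look something like the sketch below. It reuses the hypothetical Vector3 and Normalize helpers from the face-normal sketch and assumes a simple Face struct of three vertex indices; none of these names come from the book's code.

// Sketch: for each vertex, sum the normals of every face that shares it,
// average, and normalize. Assumes Vector3/Normalize from the earlier sketch.
struct Face { unsigned short index[3]; };

void CalculateVertexNormals(int numVertices,
                            const Face *faces, const Vector3 *faceNormals, int numFaces,
                            Vector3 *vertexNormals)
{
    for (int v = 0; v < numVertices; ++v)
    {
        Vector3 sum = { 0.0f, 0.0f, 0.0f };
        int shared = 0;

        for (int f = 0; f < numFaces; ++f)
        {
            // Does this face contain vertex v?
            if (faces[f].index[0] == v || faces[f].index[1] == v || faces[f].index[2] == v)
            {
                sum.x += faceNormals[f].x;
                sum.y += faceNormals[f].y;
                sum.z += faceNormals[f].z;
                ++shared;
            }
        }

        if (shared > 0)
        {
            // Average the shared normals, then normalize to get a unit vector.
            sum.x /= shared;  sum.y /= shared;  sum.z /= shared;
            vertexNormals[v] = Normalize(sum);
        }
    }
}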

Creating Your Own Format

Creating your own 3D model format can be very beneficial to your game. By “rolling your own,” you can include whatever data you need, from vertices to animations, in whatever order and whatever form you want. You can also include other data such as textures or even game-specific data such as character dialogue. Best of all, you are not limited by the constraints of an existing format, and you can change the format to fit your needs as you go.

The first thing you must decide about your format is whether it will be stored in text or binary form. Both ways have advantages and disadvantages, discussed in the following sections.

Text-Based Format

If the format is stored in a text-based form, you sacrifice some storage space for readability. A text-based format will generally take up more room, due to each character using a full byte (“255” will take three bytes in text, only one in binary). Space may be a lower priority than, for instance, readability. Most text-based formats can be opened in a text editor and modifications can be made to the data without needing a full-fledged editor. This makes it easy to do simple things such as change the textures on the model or even tweak vertex positions.

Another downside to text-based models shows up when you go to load them. Text files can be a real pain to parse, particularly if small mistakes, such as an extra space or a blank line, are inserted at an odd place.

Binary-Based Format

A binary format can solve many of these problems, but at the expense of readability. A binary-based format is generally easier to parse because the size of each data type, such as a float or a short, is the same throughout the file. A floating-point binary value is always the same number of bytes, whereas in a text file, the same number could be any number of bytes, depending on the number of digits. Both text and binary will work; pick the one best suited for your needs.

Planning the File

The next thing you need to think over is what you want to include in your files. Here are a few questions you might ask yourself during this process:

■ How will the vertices be stored? Will you have a simple X, Y, Z floating-point triple? Perhaps each vertex coordinate will be packed into a single byte, with a floating-point scaling and translation value for each mesh, much like MD2 does.

■ How will textures be handled? Will you simply store filenames within the model? You could embed the whole texture file into the model if you wanted, or even skip textures altogether and include only color and lighting data. Texture coordinates must be considered as well. Will you have only one set of texture coordinates for each vertex, or will you need multiple sets for environment or light mapping?

■ How are faces stored? Do you simply use triangles, or a combination of triangles and quads, or just quads? Another possibility is to store the model using n-gons, polygons with no set number of vertices, for each face. You can even get rid of polygons completely and store your model as a group of curved surfaces. If you choose to go with the triangles-only method, will the format be optimized for triangle strips and fans or just individual triangles?


■ What else will the face information contain? Obviously it must contain indexes into the vertex array, but what about material information, texture coordinates, or normals? All these things can be stored with the faces as well.

■ Will you use skeletal or keyframed animation? Or how about a combination of both? The animation adds a lot to a model. Skeletal animation is harder to implement (nearly impossible if you do not have a formal editor, or are not converting from another format).

For some applications you might not need any animation, preferring to store only vertices, faces, and material information.

■ Will the model be singular or multi-part? Some formats like MD2 consist of only one mesh that defines the whole model, whereas other formats such as 3ds can contain multiple connected meshes. You can even define attachment points to connect other models, as MD3 does.

■ Planning on using extra “goodies”? Some model formats include advanced features such as normal maps, bump maps, or curved surfaces. These are unique structures and must be stored in a separate part of the file.

■ Will you be including extra information not directly affecting the model? Examples of this would be development information, copyright tags, and other game-related content such as character dialogue or AI information. Just because it’s a model file does not mean you can’t add whatever extra data you want. However, you will need to be careful about adding just any information. Before you go ahead and add information to a model file, ask yourself, “is it really necessary to add this, or is it just bloat?” Adding unneeded information to model files, or any type of file for that matter, will take up extra space. You may need to stop and reconsider adding information that has little or no effect on what the audience of your game will see or perceive. If you need to keep the overall size of your game to a minimum, take care to keep from blowing up the file size on your models by adding trivial or useless information. Those few extra bytes could probably be used more effectively in another part of the game.

■ Do you plan on expanding your format later? The answer to this question could affect the design and layout of the format, even the way the format is stored. You can choose to make many assumptions about the model, such as the maximum number of vertices, or you can choose to make no assumptions at all. If you want a very expandable format, you might lay it out like the 3ds format does, with chunks and sub-chunks. This approach would allow you to add chunks later in the process without requiring you to rewrite all of your code. On the other hand, if your format will stay the same throughout the development process, you can use a more concrete layout. Keep in mind that a format that makes more assumptions trades expandability and modularity for ease of use and more efficient file I/O, whereas a format that makes no assumptions swings the other way.

■ Last, will your models require any special treatment? For instance, if you want to stream models to gamers over the Internet or a network, you will need to take special considerations when designing and compressing your models in order to stream them as efficiently as possible. Other models may be required to be compatible with other parts of the game, such as scripts or shaders, that change the way they are displayed or control the way the models act or move. Be sure to check that out before designing your format. Nothing is worse than having to go back to the drawing board because you forgot to plan for one of these scenarios.

Now you need to work out how data will be stored in the file.


Will you have a header at the start? What will it contain? It helps to sit down with a pen and paper and draw a diagram of your file structure, an example of which is shown in Figure 10.3.

[Figure 10.3 diagram: Header (ID "TERM", NumVerts, NumKeyFrames, NumFaces, NumTextures), followed by Keyframes, Texture Coordinates, Vertices (float X, Y, Z), Faces (ushort indices), and Texture Data.]


Figure 10.3 An example file structure. This chart shows the layout for a new file format. It shows what is included in the file, as well as the order. The datatype of each section is also shown; the vertices are made up of floats and the indexes are unsigned shorts.
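
As a rough C++ sketch, a header matching the Figure 10.3 layout might be declared as follows. The struct and field names are mine, taken from the diagram, and are only one way such a header could look:

// Hypothetical header for the TERM layout sketched in Figure 10.3.
struct TermHeader
{
    char          id[4];          // format identifier, e.g. "TERM"
    unsigned int  numVerts;       // vertices per keyframe
    unsigned int  numKeyFrames;   // keyframes stored in the file
    unsigned int  numFaces;       // faces; each face indexes the vertex array
    unsigned int  numTextures;    // embedded textures following the geometry
};

// Because every field has a fixed size, the whole header can be read with a
// single fread (assuming the file was written with the same struct packing):
//     TermHeader header;
//     fread(&header, sizeof(TermHeader), 1, file);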

Once you have that down exactly how you want it, it is time to create a way to make these new files.

Creating the Files

If your new format is very similar to an existing format, you can write a converter. A converter simply takes the data you want out of the original file and puts it into your new format, leaving behind anything you do not need, and adding any extra data such as normals or texture data.

If your format is slightly more bizarre, or too different from an existing format to easily write a converter, there is still an option before writing a complete editor. You can use an existing editor to export your new format. Many popular 3D editors offer packages that can be used to write your own import and export plug-ins. Some, such as 3D Studio Max, even have their own scripting language (MaxScript). Others have a software development kit that is used to write plug-ins. MilkShape 3D is an example of this. The MilkShape SDK is available free at the Chumbalum Soft site, as well as on the CD of this book.

Finally, if neither of these is an option, you can write your own editor, tailored specifically to your special format. Keep in mind that this can be a complicated, time-consuming process. I would definitely recommend checking out the other options before delving into writing your own editor.

The format I created in this section is similar to the MD2 format. However, I decided to leave out the optimization information, trim down the header, and embed texture data into the file.

This new format is now more suited to my application than MD2 was. Because I do not want people to be able to edit the skin on the model, I simply embedded the image file into the model file. Also, because I have no desire to use the optimization information included in the MD2s, I simply got rid of it. Because it no longer exists in the file, I removed all trace of it from the file’s header as well. Because I added the image data to the file, I added a section of the header that will tell the program where to find this data as well.

I decided the easiest way to create my format would be to create an MD2 file and texture files first, and then write a converter to convert and combine them into one single file. Figure 10.4 shows the result.

If you look at the CD in the directory for this chapter (/Code/Chapter10/), you will see the converter I created, as well as some sample files and code to load this new format into your programs.


Figure 10.4 The brand-new, never-seen-before TERM format in action.

Linear Interpolation

Linear interpolation is the basis behind all keyframe animation. Although it has been discussed very briefly in previous chapters, this section looks at it more deeply in a general form.

Linear interpolation is one of the most useful game programming techniques. It is used to generate new frames in-between keyframes of traditional vertex-animated models. It can also be used to position an object between two end points, depending on the current time or other factors. If a monster is supposed to be at the end of a straight path after five seconds, you can use linear interpolation to determine where the monster should be at one second, two seconds, or any other time value. This allows you to move the monster along its path at a constant speed.

When you linearly interpolate between two points, you are finding a point on the line connecting the two. To find the desired point, you need three things—the ending point, the starting point, and an interpolation value. The interpolation value is a floating-point value between 0 and 1. If the interpolation value is 0, the result is the starting point; if it is 1, the result is the very end point; if it is somewhere in the middle, well, so is the result.

The general formula for doing this is as follows:

P0 + (P1 - P0)t

Where P0 is the starting point, P1 is the ending point and t is the interpolation value.
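
In code this is a one-liner. A minimal C++ sketch (the function name is mine):

// Linear interpolation: t = 0 returns p0, t = 1 returns p1,
// anything in between returns a point on the line connecting them.
float Lerp(float p0, float p1, float t)
{
    return p0 + (p1 - p0) * t;
}

The same formula is applied to each component (x, y, and z) when interpolating points or vertices.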

Take a look at the example in Figure 10.5.


Figure 10.5 Interpolating along a line. The object is at the beginning of the line when t = 0 and the end of the line when t = 1. When t is any other value, the object is in between the endpoints.

The linear interpolation method most often used in games is called linear interpolation with respect to time. A time to get from point A to point B is provided, and at any given time, for instance every frame, the character or object must be drawn in the correct place. Linear interpolation is the lifesaver here. The only problem is calculating the current interpolation value.

First, you determine the amount of time that has elapsed since the object started moving. Then, to calculate the interpolation value, you divide the elapsed time by the total time the object should take to move the entire distance. For example, if 7.22 seconds have elapsed, and the object must reach the end in a total of 10.0 seconds, the interpolation value is 7.22/10.0, or 0.722. Be sure that the units of the elapsed time and the total time are the same. If one is seconds and the other is milliseconds, for instance, the desired effect will not occur.
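
As a sketch, interpolation with respect to time might look like this, reusing the Vector3 struct and Lerp function from the earlier sketches (the function and variable names are illustrative):

// Position along a straight path, given the time elapsed so far and the
// total time the trip should take. Both times must be in the same units.
Vector3 PositionAtTime(const Vector3 &start, const Vector3 &end,
                       float elapsedTime, float totalTime)
{
    float t = elapsedTime / totalTime;   // e.g. 7.22 / 10.0 = 0.722
    if (t > 1.0f)
        t = 1.0f;                        // clamp so the object stops at the end

    Vector3 position;
    position.x = Lerp(start.x, end.x, t);
    position.y = Lerp(start.y, end.y, t);
    position.z = Lerp(start.z, end.z, t);
    return position;
}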

Take a look at Figure 10.6 for a picture of an object moving with respect to time.

This section also has a demo included on the CD; check it out in /Code/Chapter10/Linear Interpolation/!

Optimization Tips

Part of the fun in game development is squeezing out those last few frames per second and cramming as much information, graphics, and data into your game as possible, while still staying within acceptable limits for size and speed. Here, you will learn about a few optimization tips you can use along with your 3D models.

■ Display Lists: OpenGL contains a very useful feature known as the display list. A display list holds compiled geometry. This is particularly good for static models because you only have to compile the list once and then you can display it many times. By using a display list, you can cut down the processing time tremendously. To use display lists in OpenGL, you should look into the following functions: glGenLists, glNewList, glCallList, and glDeleteLists. A short sketch of this workflow appears after this list.

■ Vertex Arrays: Vertex arrays are another option for optimizing geometry. There are three types of vertex arrays in OpenGL. The first is simply an array holding the vertices in the order they need to be rendered. After setting the vertex array information using glVertexPointer, you can use glDrawArrays to render the data. The second type of array is an extension of the first. Using functions such as glNormalPointer, glTexCoordPointer, and glColorPointer, it is possible to add normal, texture coordinate, and color data into the arrays as well. The third type is an indexed array. An indexed array is the same as a standard array with one exception. Instead of just running through the array from beginning to end, an array of indexes into the vertex array is used. The index array specifies the order in which the vertices, texture coordinates, colors, and normals are to be rendered. Using this approach, vertices can be reused, leading to a smaller array. The procedure is the same until you get to the glDrawArrays call. For an indexed array, glDrawArrays should be replaced with glDrawElements. A sketch of both the straight and indexed approaches appears after this list.

■ Compiled Vertex Arrays: Although vertex arrays are fast, compiled vertex arrays are faster. Newer versions of OpenGL define an extension that allows you to compile your vertex arrays much like you compile a display list. The disadvantage here is that the data within the arrays cannot be modified without first unlocking the array, so compiled vertex arrays are best left to static objects. To use compiled vertex arrays, you should look into glLockArraysEXT and glUnlockArraysEXT, as well as the functions for regular vertex arrays (see the sketch after this list).
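
Here is a minimal sketch of the display-list workflow described in the first tip. The drawing call in the middle stands in for whatever immediate-mode code your model loader already uses:

// Compile once, typically at load time.
GLuint modelList = glGenLists(1);
glNewList(modelList, GL_COMPILE);
    DrawModelImmediate();          // placeholder for your existing drawing code
glEndList();

// Draw as many times as you like, once per frame.
glCallList(modelList);

// Free the list when the model is no longer needed.
glDeleteLists(modelList, 1);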
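
For the vertex array tip, a sketch of setting up the arrays and drawing them might look like this; the array names and counts are placeholders for your own model data:

// Enable the arrays you plan to supply.
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

// Point OpenGL at the model data (stride 0 means tightly packed).
glVertexPointer(3, GL_FLOAT, 0, vertices);       // 3 floats per vertex
glNormalPointer(GL_FLOAT, 0, normals);           // normals are always 3 components
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);    // 2 floats per texture coordinate

// First and second types: run straight through the arrays...
glDrawArrays(GL_TRIANGLES, 0, numVertices);

// ...or, for the indexed type, replace glDrawArrays with glDrawElements so an
// index array controls the order and vertices can be reused:
// glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, indices);

glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);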
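
Finally, a rough sketch of the compiled vertex array tip. glLockArraysEXT and glUnlockArraysEXT come from the EXT_compiled_vertex_array extension, so the entry points must be fetched through the extension mechanism first (for example, wglGetProcAddress on Windows); the pointer names below are my own:

// Assumed to have been fetched from the extension mechanism beforehand:
//     pglLockArraysEXT   = (PFNGLLOCKARRAYSEXTPROC)wglGetProcAddress("glLockArraysEXT");
//     pglUnlockArraysEXT = (PFNGLUNLOCKARRAYSEXTPROC)wglGetProcAddress("glUnlockArraysEXT");

// Set up the vertex arrays as usual, then lock them so the driver can cache
// the transformed data. Locked data must not be modified.
pglLockArraysEXT(0, numVertices);

glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, indices);
// Additional passes over the same geometry benefit the most from the lock.

pglUnlockArraysEXT();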

Conclusion

You now have a bag of tips and tricks that you can apply to almost any model format. You can calculate normals for models that do not include them within the file, enabling you to apply lighting to your 3D models and thus give them a more realistic look. You can also calculate where an object should be at a certain time using linear interpolation. This is especially useful when animating objects that use snapshots of the model to represent different positions. Using linear interpolation, you can create more snapshots to fill in the gaps between the originals and create a model that animates smoothly.

You also learned various techniques you can use to optimize your display and render code to increase the overall speed of your engine. This extra speed allows you to add more game-specific elements, or simply increase the frame rate when running the game.

Next up are the appendixes. In the appendixes, you will find a table that shows various file formats, along with the editors that create them. You will also find an introduction to STL vectors; useful if you need resizable arrays or if you need the special functions such as searching and sorting that STL vectors offer. You will also encounter a section that describes some of the paths that you may want to take in the future and suggests some Web sites and books for further reading.
