Ethical and Legal Constraints

Copyright
Legally, copyright protects a creator's original work from being copied without permission for a set amount of time. In the UK it usually lasts for the lifetime of the author plus 70 years.

Disadvantages
In the game development industry, a disadvantage would be that a full video game isn't covered by a single copyright: things such as animations, sounds, and code are each protected as separate works, which can make registering and enforcing rights over all of them very expensive.

People could also copy your idea before you publish it and claim the copyright themselves, which could stop the real author from proving ownership of their own work.

Advantages
An advantage would be that since the duration of a copyright is so long, you are almost guaranteed that no one can legally profit from your work while it lasts. In my 3D environment I could copyright the design and the individual objects to make sure no one can take my assets and use them in their game for a profit.


Ethical
Ethically, a video game needs to carry warnings about using the game. For example, if the game contains flashing lights it will need an epilepsy warning.

Disadvantages
The disadvantage here is that warnings could put some buyers off, so the developers could potentially lose customers.

Advantages
The advantage of giving warnings to users is that they are less likely to get injured. If they are injured anyway, the developers are in a much stronger legal position because a warning was given.

HA7 Task 6 - Constraints

File Size

The file size of a 3D model comes mostly from its polygon count and vertex count, i.e. how many polygons and vertices are in the model. A realistic third-person game needs higher polygon counts for the character than other games do, since the character is always visible because of the third-person view and needs to look detailed to fit the realistic art style.

A lower polygon count can be used for the characters in a first-person game, as they won't always be visible and so don't need the extra detail.

Polygons and Triangles

When creating a model, artists generally want to work with easy-to-use polygons such as quads (4-sided), as they work well with edge loops. An artist will want to preserve these easy-to-use polygons for as long as possible to make building the model as quick and easy as possible.

The problem with this is that game engines are better at rendering triangles. The model's polygon count is therefore an inaccurate representation of how many polygons will actually be rendered: when the model is placed into the game it is broken down into triangles, and therefore into more polygons.

Triangles and Vertex Counts

The vertex count is how many vertices (corners) are on a model. The vertex count matters more than the triangle count in terms of memory and performance, although the two counts can be similar if all of the triangles are properly connected to each other: 1 triangle has 3 vertices, 2 connected triangles have 4 vertices, 3 connected triangles have 5 vertices, and so on.
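The pattern above (a run of n fully connected triangles needs n + 2 vertices) can be sketched as a small helper. This is a simplified model of a triangle strip, and the function name is my own, not from any engine:

```python
def strip_vertex_count(triangles: int) -> int:
    """Vertices needed by a strip of fully connected triangles.

    The first triangle needs 3 vertices; each additional triangle
    shares an edge with the previous one, so it adds only 1 new vertex.
    """
    if triangles < 1:
        return 0
    return triangles + 2

# 1 triangle -> 3 vertices, 2 -> 4, 3 -> 5, matching the text above
assert strip_vertex_count(1) == 3
assert strip_vertex_count(3) == 5
```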

Seams in the UV map, changes between smoothing groups, and material changes are all treated as physical breaks in the model's surface when it is rendered in the game. Wherever these breaks occur the vertices have to be duplicated so that the model can be sent to the GPU in renderable chunks.

Using too many smoothing groups, too many materials, and over-splitting the UV map are all causes of an excessive vertex count. Too high a vertex count slows performance when the model transforms, and also increases how much memory the mesh uses.
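A rough way to picture the cost of those breaks (the function name and numbers are illustrative, not taken from any specific engine): every vertex sitting on a seam, smoothing-group boundary, or material change needs one duplicate per extra attribute set, because the GPU stores one vertex record per unique combination of position, normal and UV.

```python
def gpu_vertex_count(shared_vertices: int, split_vertices: int,
                     extra_sets_per_split: int = 1) -> int:
    """Estimate the vertex count the GPU actually sees.

    shared_vertices:      vertices counted once in the modelling package
    split_vertices:       vertices lying on a UV seam, smoothing-group
                          boundary, or material change
    extra_sets_per_split: extra attribute combinations per split vertex
    """
    return shared_vertices + split_vertices * extra_sets_per_split

# a 1,000-vertex mesh with 120 seam vertices ends up as 1,120 on the GPU
assert gpu_vertex_count(1000, 120) == 1120
```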

Rendering Time

Real Time Rendering - This is for interactive media like games and simulations. All of the calculations are done quickly so each image can be displayed in a fraction of a second. Each image is called a frame, and frames displayed in quick succession create the illusion of movement. It is best to aim for at least 30 frames per second, and 60 or higher is better. The target is as high a frame rate as possible, showing as much information as quickly as possible and keeping the animation smooth.

Rendering software can simulate real life by adding visual effects like motion blur, depth of field and lens flare. Real time rendering is done using the GPU.
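Those frame-rate targets translate directly into a per-frame time budget: all of the rendering work for a frame has to fit inside it. A quick sketch (the helper name is my own):

```python
def frame_budget_ms(target_fps: float) -> float:
    """Milliseconds available to render a single frame at a target rate."""
    return 1000.0 / target_fps

# at 30 fps each frame must be finished in ~33.3 ms; at 60 fps, ~16.7 ms
assert round(frame_budget_ms(30), 1) == 33.3
assert round(frame_budget_ms(60), 1) == 16.7
```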

Non Real Time Rendering - This is for animations which aren't interactive, such as a film or TV show. Because non real time rendering isn't trying to display the image quickly, it can trade time for better image quality, producing much more realistic and detailed animations. Each frame can take anywhere from a few seconds to a few days depending on the complexity of the scene. Once all of the frames have been rendered they are displayed in quick succession and, just like with real time rendering, the more frames displayed per second the smoother the animation is.

When photo realism is needed, ray tracing or radiosity can be used. Particle effects can simulate weather or environmental effects like rain, fire, smoke or explosions. Volumetric sampling can create atmospheric effects like fog. Caustics simulate light being focused by uneven refracting surfaces. Subsurface scattering simulates light entering and scattering inside a translucent object.

Rendering times can be extremely long in some cases, and because of this a 'render farm' can be used to speed up the process by using lots of hardware to render images simultaneously rather than one at a time. However, because of the increase in computing power and the lower prices of more powerful hardware, rendering complex scenes can now be done on a single computer system, even outside a professional environment.
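The arithmetic behind render farms is simple: total render time is frames multiplied by seconds per frame, divided by the number of machines working in parallel. A sketch with made-up but plausible numbers (and an idealised assumption of perfect parallelism):

```python
def total_render_days(frames: int, seconds_per_frame: float,
                      machines: int = 1) -> float:
    """Wall-clock days to render a sequence, assuming perfect parallelism."""
    return frames * seconds_per_frame / machines / 86400  # 86,400 s per day

# a 90-minute film at 24 fps is 129,600 frames; at 60 s per frame,
# one machine takes 90 days while a 30-machine farm takes 3 days
assert total_render_days(129_600, 60) == 90.0
assert total_render_days(129_600, 60, machines=30) == 3.0
```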

HA7 Task 5 - 3D Development Software

Blender

Blender is free, open-source 3D modelling software. It supports modelling, rigging, animation, simulation, rendering, compositing, motion tracking, video editing and game creation.

Blender is cross-platform, running on Linux, Windows and Macintosh, and uses OpenGL to provide a consistent interface experience across each platform.

The public can contribute code to Blender because it is open source. This allows bug fixes and community-requested additions to be created and implemented quickly.

3D Studio Max

3D Studio Max runs on the Windows operating system. It is widely used for video work, such as film effects and TV, because it supports modelling, animation, particle systems, dynamic simulation, radiosity, global illumination, normal map creation and shaders. It also has a customisable user interface and its own scripting language.

It is also used by game developers to create models and animations.

Maya

Autodesk Maya, commonly called Maya, is currently developed by Autodesk, but originally was made by Alias Systems Corporation. Maya is used to create interactive 3D applications such as video games, animations for film and TV, and visual effects.

Lightwave

Lightwave is used to create and render animated and static 3D images. It has a rendering engine with advanced features such as realistic reflection and refraction, radiosity and caustics.

When animating the user is given access to features such as reverse and forward kinematics for animation, particle systems and dynamics.

If the user is comfortable with programming, they have access to an SDK offering LScript (a proprietary scripting language) and C language interfaces.

Cinema 4D

Cinema 4D is developed by MAXON Computer GmbH of Friedrichsdorf, Germany, and runs on Windows and Macintosh operating systems. It supports procedural and polygonal/subdivision modelling, animation, lighting, texturing and rendering, as well as other features commonly found in 3D modelling software.

There are 4 variants of the application from MAXON: the 'Prime' version offering the basic features, a 'Broadcast' version adding motion-graphics features, a 'Visualize' version which adds architectural design features, and 'Studio', which has everything.

ZBrush

ZBrush is a digital sculpting program combining 3D with 2.5D modelling, texturing and painting. ZBrush uses proprietary technology to store information on lighting, colour, material, and depth for all objects on the screen. ZBrush is more like sculpting than other 3D modelling software.

It can be used to create very detailed models, with up to 10 million polygons, for use in games, film and animation. It is well known for being able to create high-frequency details which would previously have been made using bump maps. The finished, very detailed mesh can be exported as a normal map to be used on a low-polygon version of the same model.

Sketchup

Sketchup is 3D modelling software usable in a variety of ways to fit a range of industries. It is useful in architecture, mechanical design, film and video games. It has two versions: a free version and a professional version.

Sketchup advertises its ease of use, and gives the user access to an online repository of free models called the 3D Warehouse, which users can download from, edit and contribute to. It also accommodates third-party plug-in programs, which allow for new capabilities (such as near photo-realistic renders) and can enable placement of models in Google Earth.

HA7 Task 4 - Mesh Construction

Primitive Modelling

Primitive modelling is a common method of creating meshes. It is done by creating primitive meshes offered by the 3D modelling software being used. These primitives are joined together to create a more complex shape.

The 3D primitive meshes include cubes, spheres, cylinders, toruses and pyramids, along with 2D primitives such as circles and squares.

http://www.3dm3.com/forum/articles.php?action=viewarticle&artid=182
This gun was created using primitive shapes, and you can see the outline of each primitive. The gun is mostly made out of resized cubes and cylinders. Other parts of the model, such as the chamber and the stock, were made by starting with primitive shapes and then editing them.

Box Modelling

This technique starts with a primitive, such as a cube, and uses a subdividing tool to add new vertices and edges to it. These new vertices and edges can be manipulated as normal, and even subdivided again to create more polygons. Then, by extruding, the shape can be made larger and more detailed.
http://ibshart.blogspot.co.uk/2014/02/intro-to-blender-extruding-box-modeling.html
Above is a mesh which started as a primitive cube and was gradually made more complex by subdivision and extrusion until a human face was created. This example was built in Blender, and the creator would use the mirror tool to create the other side of the head once the first side was finished, resulting in a perfectly symmetrical model.

Extrusion Modelling

In extrusion modelling the developer traces over a reference image of an object, creating polygons that follow the detail in the image. This produces a 2D shape matching the first image. Once this is done, a second image showing the same object from a different angle is used to introduce depth: the mesh is extruded along the depth axis until it matches this second reference image.

https://www.packtpub.com/books/content/how-model-characters-head-blender-part-1
In the image above, the left side shows the 2D view of the reference image, using the X and Y coordinates in 3D space to create the front of the shape. The right side shows the Z, or depth, axis. By moving the mesh along this axis the developer creates depth, matching the mesh to the reference image's depth.

HA7 Task 3 - Geometric Theory

Geometric theory describes how a developer can build and transform a 3D object using vertices, edges, and faces. This is done by first creating a basic mesh such as a plane, cube, or circle, which can then be edited into more complex shapes.

https://jcallisterdesign.wordpress.com/year-1/unit-66-3d-modelling/assignment-one/task-one/introduction-to-3d-modelling/explaining-geometric-theory/
The mesh is constructed in 3D space using 2D polygons such as the triangle above. To create depth, polygons are joined together to form 3D objects, as shown below.

http://www.real3dtutorials.com/tut00007.php
The Z axis above represents depth in 3D space, while the X and Y axes are what we use to measure 2D shapes. As shown above, six 2D square polygons are placed next to each other, extending through the Z axis to create the 3D shape called a cube.

The cube is a primitive shape supplied by most 3D modeling software. Other primitives exist which can be quickly created by the software for manipulation by the developer. Common primitive shapes are: spheres, cubes, torus, cylinders and pyramids.
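The cube primitive also illustrates how modelling software typically stores such shapes: a shared list of vertices plus faces that index into it. A minimal sketch (the layout is generic, not any particular package's file format):

```python
# 8 corner vertices of a unit cube, indexed as x*4 + y*2 + z
CUBE_VERTICES = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# 6 quad faces, each listing the indices of its 4 corner vertices
CUBE_FACES = [
    (0, 1, 3, 2), (4, 5, 7, 6),  # the two faces with x = 0 and x = 1
    (0, 1, 5, 4), (2, 3, 7, 6),  # the two faces with y = 0 and y = 1
    (0, 2, 6, 4), (1, 3, 7, 5),  # the two faces with z = 0 and z = 1
]

assert len(CUBE_VERTICES) == 8 and len(CUBE_FACES) == 6
```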

http://flylib.com/books/en/4.423.1.23/1/
Primitives are useful as a starting point in creating an object, sometimes offering many polygons which would take a long time to create manually, such as with a torus, cylinder or sphere.

The mesh is made more complex by manipulating the object's vertices, edges and faces, and finally becomes a finished object to be implemented into a game world. Curves can be achieved by moving vertices through 3D space, which is useful for creating a smooth surface that isn't flat, such as the ground.

This matters when using geometric theory to create a 3D environment, because a game character will need to move across this surface; if it isn't smooth enough, the surface will not only look wrong but the character may get stuck moving across it.

This is important in all aspects of geometric theory, not just in curved objects. The appropriate mesh should be chosen at the beginning of development to reduce workload and give the developer a good starting point. For example, someone who wants to create a simple table should start with a cube, since it has 8 vertices. This shape can then be edited by reducing or increasing the size of the edges to create a rectangular box, and extrusions can be made to create the legs. This is easier than starting with, for example, a circle.

http://www.cmap.polytechnique.fr/~peyre/geodesic_computations/
This mesh has been created using geometric theory; going from left to right, the meshes become more advanced and detailed as more polygons are added. The mesh is made out of many triangles because a triangle is the simplest shape that can be used to build 3D objects. As more polygons are added the rabbit becomes smoother, and smaller details can be made out: the first rabbit barely has a tail, for example, while the second and third have clearly visible tails.

A middle ground must be found between a low polygon count and detail. Too little detail looks bad, but too many polygons take a lot of computational power to render.

HA7 Task 2 - Displaying 3D Polygon Animations

How 3D Models are Displayed

The graphics processing unit can show 3D animation by rendering information such as textures, lighting and shadows to match a 3D environment created on a computer system.

A GPU receives information from the CPU about what to render, then renders it using one of the two main rendering methods: rasterisation or ray tracing.

If rasterisation is used, each 3D triangle is projected onto the 2D screen and the renderer works out which pixels it covers. These pixels are then set to the appropriate colour.
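The "working out which pixels it covers" step can be sketched with edge functions: a pixel centre is inside a counter-clockwise triangle when it lies on the left side of all three edges. This is a simplified illustration of the idea, not how any specific GPU implements it:

```python
def edge(ax, ay, bx, by, px, py):
    """Positive when point (px, py) is to the left of the edge A -> B."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def covered_pixels(tri, width, height):
    """Pixels of a width x height grid whose centres fall inside a
    counter-clockwise 2D triangle tri = [(x0, y0), (x1, y1), (x2, y2)]."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    hits = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel centre
            if (edge(x0, y0, x1, y1, px, py) >= 0 and
                    edge(x1, y1, x2, y2, px, py) >= 0 and
                    edge(x2, y2, x0, y0, px, py) >= 0):
                hits.append((x, y))
    return hits
```

For example, for the triangle (0, 0), (4, 0), (0, 4) on a 4x4 grid, the pixel at (0, 0) is covered while the pixel at (3, 3) is not.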

If ray tracing is used, the GPU traces the path of light through each pixel, stopping at the first piece of geometry it hits and simulating the lighting effects there. Ray tracing can produce much more realistic results than rasterisation, but it costs far more computational power.

Because of this, rasterisation is used for speed-reliant rendering, such as in video games, while ray tracing can be used for rendering ahead of time, such as in film and TV.

API

An API (Application Programming Interface) is an interface a program such as a game engine can use to interact with another piece of software or hardware, using the API's protocols, routines and tools. An example of an API would be OpenGL for 3D graphics.

Operating systems can provide an API for programmers to use, which is useful because all programs written with the API present a similar, understandable interface to end users. APIs are good for programmers too: before they were available, the graphical part of a program had to be rewritten for each operating system and checked for compatibility with most hardware.

Graphics Pipeline

The graphics pipeline is how the 3D information is converted into images and videos displayed on the monitor. There are many stages to the graphics pipeline.

The first stage is describing the scene using geometric primitives. This is done with triangles, because a triangle is always planar (its three vertices always lie on a single flat plane). Anything else must be broken down into triangles to be understood and rendered.

Secondly, the object's local coordinates are transformed into the 3D world coordinate system.

Next, the 3D world coordinate system is transformed into the 3D camera coordinate system, with the location of the camera acting as the origin.

Now the effects of lighting are calculated and shadows are formed. If a brightly coloured object is in a room with no light it will appear dark to the camera, because it is in shadow once the lighting has been calculated. Reflections are also calculated in this step.

The 3D coordinates are now projected to 2D through the view of the camera. If the projection is orthographic, everything appears to be on the same plane, because objects keep the same size regardless of distance from the camera. If the projection is perspective, objects further away become smaller; this is achieved by dividing the x and y coordinates of each vertex of each primitive by its z coordinate.
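The perspective divide described above fits in a few lines (the function name and focal length parameter are illustrative):

```python
def project(point, focal=1.0):
    """Perspective-project a camera-space 3D point onto the 2D image
    plane by dividing x and y by the depth z (z must be positive,
    i.e. in front of the camera)."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

# the same offset appears smaller as the point moves further away
assert project((2.0, 2.0, 2.0)) == (1.0, 1.0)
assert project((2.0, 2.0, 4.0)) == (0.5, 0.5)
```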

Next, geometry which isn't inside the view of the camera is clipped and ignored, since it will not be visible.

At this step the 2D image goes through the rasterisation process, where operations are done to each individual pixel.

Lastly, colours are assigned to the image, either from a texture stored in memory or by a shader program.

HA7 Task 1 - Applications of 3D

3D Being used in environments

Environments created in 3D are generally used for video games and film. They are expensive to create so they aren’t really used in TV series unless they have large budgets. They are also risky to use as if they don’t look real they will stand out and look especially bad.

Successful usage of 3D environments are therefore seen most in video games, or films created fully in CGI (computer generated images) where the environment won’t contrast with the actors.

In video games 3D environments are used in most games which are 3 dimensional, where the player can move in 3 dimensions. There are some cases where the game has 2D gameplay, but the environment is 3D, with foreground and background.

3D games can be created in game engines such as Unity or Unreal Engine. These kinds of programs are the best tools for creating 3D games, offering developers many options such as importing assets created in other 3D applications and quickly testing the game during development.

3D assets can be created in programs such as Blender, or Maya, which are also used for other development projects as well as video games. With these programs it is easy to create an environment by modelling a variety of objects. For example, when creating a city environment a developer can create a few sections of a building and rearrange them, or combine them with other buildings to make a wide range of unique looking areas in the city.
Entire environments could be created in these programs and imported into game engines; however, this wouldn't allow for easy editing, and the file size would be very large, so it would take a long time to load the file to edit anything in the first place.

3D Models

3D applications can be used in medicine for things such as 3D printing. A model can be printed and used as the basis for creating new body parts, such as 3D printing a bone to replace a damaged one. Printed equipment is also common, as it makes obtaining equipment which might normally be hard to produce much easier.

Oxford Performance Materials' 3D-printed skull implant
Some people have had the top half of their skulls replaced by perfectly fitting 3D printed ones. This makes 3D models useful for important physical purposes such as saving lives. The downside to this is that it may contradict the views of some religious beliefs, and those people who refuse to use the technology can’t benefit from it.

3D Product Design

Siemens
This image shows how engineers can use 3D modelling software. The engineer here is designing a car and how it functions. A mistake here which isn't found could end up causing a person driving this car to have a crash, or have to spend lots of money repairing a problem which isn't their fault. Therefore it is important to properly design a product in 3D before creating it.

3D in TV

3D TV was adopted after 3D was introduced to film; however, it was never used as much as regular TV, since viewers had to constantly wear 3D glasses and 3D TVs cost much more than regular TVs.
There are other kinds of 3D techniques such as stereoscopic, multi-view, or 2D+depth.

3D in film

In James Cameron's Avatar, 3D environments are used throughout the film to generate an alien world. The actors wear green suits which can be used like green screens to create the alien characters in CGI.


There is also the kind of 3D which appears to come out of the screen, which usually requires the viewer to wear 3D glasses. In the classic red/cyan (anaglyph) method, this is done by shifting the red channel slightly to the side of the image; when viewed through the glasses the image's colour is corrected, but it also appears to come out of the screen.
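The channel shift can be sketched in plain Python on an image stored as rows of (r, g, b) tuples. This is a toy version of what real anaglyph tools do, with the function name my own:

```python
def shift_red_channel(image, offset):
    """Return a copy of an RGB image with only the red channel moved
    `offset` pixels to the right; green and blue stay in place."""
    height, width = len(image), len(image[0])
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            src = x - offset                       # where the red came from
            r = image[y][src][0] if 0 <= src < width else 0
            _, g, b = image[y][x]
            row.append((r, g, b))
        out.append(row)
    return out
```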
As an example which I made of an image of Crash Bandicoot:



3D on the web

There was a project called Web3D aimed at fully displaying websites, and allowing navigation of them, in 3D. Now Web3D is a term used to describe all interactive 3D content shown through web browsers by embedding the content into a web page's HTML. Modern web pages using 3D are normally powered by WebGL, though it isn't widely used.


This website has a 3D representation of the human body, it can be moved around in 3D space and interacted with to learn about anatomy.

3D in games

In the games industry 3D is used to create worlds where the player can move in 3 dimensions. This is for reasons such as immersion, as life is also in 3 dimensions, or for creating certain game play experiences and mechanics.
3D character model - knight templar by MacX85
This is a 3D model for a character in a game; it looks more realistic in-game than a 2D character would. 3D games with historic characters like this are useful for experiencing the past in a more realistic way than a 2D game could offer.

Games can also appear 3D by using 2D sprites in a certain way, as in the original Doom. This method is called 2.5D; it is different from true 3D because there is no Z axis to move through depth.

3D in education

3D can help education because it adds more detail to the topic being learnt. For example, when learning anatomy, architecture, or chemical reactions, students can interact with 3D models and learn from them directly: they can test out building structures inside a simulation, or observe a chemical reaction at the molecular level.
3D printing can also be used for teaching as models can be created and given to students for quick iterations of 3D development in a variety of classes such as engineering, architecture, or art.



3D architectural walkthroughs

A building with a complex pattern like this would need a lot of testing to make sure it is structurally safe and viable to be built. Testing it in a 3D virtual environment would be perfect to test this and plan out how it would be built.


They can also be useful for engineers in a similar way: testing can be done in a 3D environment, and exact sizes can be measured before 3D printing to maintain precise measurements. Without these applications, mistakes are much more likely because of human error, and depending on what is being made this could be dangerous without virtual testing.